How ChatGPT Can Be Used in the Information War

By Tina Mulqueen
Published on February 23, 2023

My company has a unique vantage point as a communications firm that serves media and technology brands, and sometimes the intersection of the two in adtech. That position has illuminated how the emergence of social media and the algorithmic digital advertising model has indelibly reshaped society.

Seemingly overnight, platforms that earned money through monetizing user attention became the primary brokers of our news. The safeguards that we had established in traditional journalism disappeared, and the new model came with an inherent conflict of interest. Since user attention was a commodity, the headlines that garnered attention were rewarded. The new structure favored sensational and provocative headlines over in-depth, fact-based journalism.

The ramifications have been grave. There are unintended consequences, like political polarization and heightened levels of depression and anxiety in some groups. There are also more overtly nefarious acts by tech-savvy communities that exploit algorithms and perpetuate false stories, or fake news, to manipulate the public into supporting an agenda (think Cambridge Analytica).

So, when I learned that ChatGPT and other generative AI tools built on language models are being used by news publishers and content creators to draft content, I was concerned.

Let’s take a look at the technology, what’s at stake in the media, and how we can protect ourselves from both unanticipated consequences and nefarious actors.

Why would publishers and news outlets use ChatGPT?

Generative AI like ChatGPT is so effective because of the language model it uses, which analyzes a vast dataset in order to formulate a predictive response. In the case of Bing Chat, which is powered by the technology behind ChatGPT, that dataset is effectively the entire public internet.
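To make the idea of a "predictive response" concrete, here is a deliberately simplified sketch. Real language models are vastly more sophisticated, but the core principle is the same: given the words so far, predict the word most likely to come next, based on patterns in training data. The tiny bigram model below is a hypothetical illustration, not how ChatGPT is actually built.

```python
# Toy illustration: a model that predicts the next word by counting
# which word most often followed it in a small "training" text.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows the model".split()

# Count, for each word, which words followed it and how often.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word seen most often after `word`, or None if unseen."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

def generate(start, length=5):
    """Greedily chain predictions to 'write' a short passage."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)
```

Note what this sketch also makes obvious: the model will happily keep generating text whether or not the result is true. It has no notion of facts or sources, only of statistical likelihood, which is the root of the accuracy concerns discussed below.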

If you spend a few minutes with the technology, you’ll see that responses to queries are surprisingly human-like, and many of the model’s predictions are accurate because of the immense amount of data available. The app can handle all sorts of queries, from research and writing to coding.

The technology can write a full article in seconds, which makes it especially attractive to publishers and other content creators for its time and cost savings and its potential to support SEO efforts. Because it saves time, it allows publishers to respond quickly to trending keywords and news stories, effort that digital algorithms are likely to reward with views. And since views are tied to monetization in our digital ad model, publishers can potentially increase their demand and revenue.

Because it saves time, it also saves money: it can reduce the hours copywriters and researchers need to produce an article, consolidate editing roles, and replace some positions entirely. That labor savings comes at a time when many publishers are cutting costs.

Some publishers are already using tools to track trending topics and employing ChatGPT or other generative AI to draft content as part of their SEO and content strategies, with strong results.

(One caveat: users should tread carefully when using ChatGPT to produce content for SEO. Particularly since Google competes with ChatGPT, the search engine is likely to penalize ChatGPT-created content under its spam policy. It’s unclear how Google will handle content from Bard, its own competitor to Bing Chat.)

Why should we worry?

While the benefits to publishers are obvious, there are a few significant problems with the technology – especially when we consider using these programs to draft content for news sites.

To start, ChatGPT’s responses don’t cite sources of information, and sourcing is a foundation of journalism. Worse, the responses may not even be accurate, because of the nature of the model: ChatGPT will answer a query even when accurate information is unavailable, simply by predicting the words likely to come next. This means the results are sometimes nonsense, and the AI does no fact-checking.

In the case of Bing’s ChatGPT-powered search, sources are cited; however, providing one well-formulated answer to a search query in lieu of numerous recommended articles may decrease traffic to publishing websites and reduce the amount of content vetting done by individual users.

To make matters worse, the mere existence of the technology, and the knowledge that some publishers are already using it to draft articles, will pressure the rest of the ecosystem to adopt it, exacerbating these problems and reinforcing incentives that are not aligned with the public interest.

Generative AI has the potential to worsen the fake-news problem both by becoming a source of it and by serving pre-established false narratives as query results. It can also accelerate a troubling trend: fewer true journalists engaged in thoughtful reporting, with incentives stacked further against them in favor of ad-driven, algorithmic content discovery.

Unchecked, this is a serious threat to the integrity of our already muddied media ecosystem. More false news will correlate with more radicalized content and even less trust in reputable news sources, which are already losing the battle for attention and support in an ecosystem that favors sensationalist pundits and outlets.

Moreover, what we’ve seen with other algorithm-based models is the emergence of businesses that exploit the model for monetary gain. Given generative AI’s role in search and in product and people discovery, this is big business, and black-hat actors will certainly appear without a clear safety infrastructure.

These issues can have serious consequences. Without an environment that supports fact-based journalism, our very democracy is at stake. Without checks and balances on how algorithms can be exploited for profit, we can exacerbate geopolitical issues and inequality.

What do we do?

In the specific case of ChatGPT, it’s likely that Google will create some safeguards around showing content generated by the AI in search results. Not only does Google have a spam policy that specifically penalizes auto-generated content, but it has business incentives to ensure its competitive advantage over ChatGPT since they will compete in the search space.

But safeguards created by tech giants should be understood in the context of their motivations: to fortify their respective competitive positions in the market. Legislation should fill the gap to ensure these safeguards are in line with public safety.

Specifically, both tech companies and publishers should be required to implement checks and balances to ensure the integrity of content produced by or developed with generative AI.

Moreover, following the example of a handful of states — including Illinois, New Jersey, Washington, Colorado and Texas — we should expand upon legislation that requires media literacy education in schools so that consumers of content can get better at analyzing the reputability of sources.

AI has the potential to create unprecedented value for its users, but the time to encode our human values in the technology is now. We need to be intentional about how AI is deployed and think through its consequences so that we can create incentives that are aligned with a human-centered future.


Tina is a Staff Writer at Grit Daily. Based in Washington, she speaks and writes regularly on sustainable marketing and entrepreneurship practices. She’s carved out a niche in digital media and entertainment, working with brands such as CBS, Vanity Fair, Digital Trends and Marie Claire, and at events such as The Academy Awards, the Billboard Music Awards, the Emmys, and the BAFTAs. Her writing has been featured in a regular column on Forbes, Thrive Global, Huffington Post, Elite Daily and various other outlets. For her work, she’s been recognized in Entrepreneur, Adweek, and more. Tina also founded a non-profit, Cause Influence, to expand the reach of important social causes. Under her non-profit, she takes on pro bono clients with an emphasis on equality and representation. She also founded and manages a media company called Et al. Meaning “and others,” Et al.’s mission is to tell the stories of underrepresented individuals and communities.
