The views expressed by contributors are their own and not the view of The Hill

AI could create a disinformation nightmare in the 2024 election


When social media first burst onto the political scene in the early 2010s, it was hailed as a “liberation technology” that would accelerate the spread of democracy around the world. Yet in the aftermath of the 2016 U.S. elections, experts instead asked, “Can democracy survive the internet?”

The speed at which social media turned from savior to spoiler of democracy in less than five years was head-spinning. The rise of hate speech, echo chambers, filter bubbles and, perhaps most of all, the spread of false information (aka “fake news,” misinformation, disinformation, etc.) online led to serious reevaluations of the technology’s relationship to politics.

Enter ChatGPT, which took less than six months to go from a marvel of technological sophistication to quite possibly the next great threat to democracy. Many would-be threats have been ascribed to the rise of large language model chatbots (of which ChatGPT is the preeminent example, albeit one of many), but in terms of politics, the primary concern is that ChatGPT and other forms of content-generating AI will turbocharge the spread of political misinformation ahead of the 2024 U.S. elections.

There is good reason to be concerned. In the aftermath of the 2016 elections, misinformation became an object of intense concern and was inextricably linked to social media, because social media drove down the cost of spreading it. Misinformation has long been part of political life in both democratic and non-democratic systems. However, spreading misinformation traditionally required real resources, such as access to a printing press centuries ago or, in the modern era, access to print media, radio or television.

Social media changed this calculation forever, creating an environment in which anyone could share information that — at least in theory — had the potential to go viral and reach millions of people at no cost. There were economic benefits from producing such viral content. And even when content did not go viral, it could still be seen by many in one’s online networks.

And yet the content that would hypothetically go viral still needed to be produced by someone. Even if it had become easier than ever to spread fake news stories, someone still had to write those stories or Photoshop the pictures in them.

In the last six months, however, we have reached a new milestone. AI can now write high-quality text and produce high-quality images. Video is likely not too far behind.

In other words, just as social media reduced barriers to the spread of misinformation, AI has now reduced barriers to the production of misinformation. And it is exactly this combination that should have everyone concerned.

When considering AI-fueled misinformation, it is useful to distinguish between AI-generated images and AI-generated text. Perhaps counterintuitively, images may actually be easier to address. One of the great problems in addressing text or article-based misinformation is that acting on misinformation — by removing it, down-weighting it in feeds, or attaching various types of warning labels — requires someone (or some algorithm) to decide what is true.

However, if AI-generated images can be labeled as such during the production process, using watermarks or metadata that are either unalterable or that require significant effort and skill to alter, then it is possible to avoid such debates over who gets to say what is or is not real.

If such metadata can trigger automatic labeling when images are displayed online (in social media posts, search engine results, etc.), then we might be able to establish a very different set of public expectations about what is real and what is fake than previous efforts to address text-based misinformation have managed.
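To make the idea concrete, here is a minimal sketch in Python of what display-time labeling based on provenance metadata might look like. It assumes a hypothetical metadata key, "ai_generated," written into an image's metadata at generation time; real provenance schemes such as C2PA content credentials use cryptographically signed manifests and are far more robust than this illustration.

```python
# Illustrative only: checks a hypothetical "ai_generated" provenance flag
# embedded in an image's metadata and decides whether to show a label.
# Real provenance standards (e.g., C2PA) rely on signed manifests, not plain tags.
from PIL import Image

PROVENANCE_KEY = "ai_generated"  # hypothetical key assumed to be written at generation time


def needs_ai_label(path: str) -> bool:
    """Return True if the image carries the (assumed) AI-provenance flag."""
    with Image.open(path) as img:
        # PNG text chunks and similar metadata surface in img.info
        value = img.info.get(PROVENANCE_KEY, "")
        return str(value).lower() in {"true", "1", "yes"}


if __name__ == "__main__":
    import sys

    for image_path in sys.argv[1:]:
        label = "AI-generated" if needs_ai_label(image_path) else "no provenance flag found"
        print(f"{image_path}: {label}")
```

A platform could run a check like this at upload or render time and attach a visible label; the hard part is making the flag survive screenshots, re-encoding and deliberate stripping.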

Of course, AI also reduces the cost of producing disinformation in the form of text, which will be much more difficult to label in the same manner using metadata, because text can simply be cut and pasted from one program to another. Detecting AI-generated text from chatbots such as ChatGPT will therefore have to be done by other forms of AI, opening up more room for ambiguity and, almost certainly, greater opportunities for claims of bias in the process.
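As a rough illustration of what machine detection of machine-written text looks like in practice, the sketch below scores a snippet with an off-the-shelf classifier via the Hugging Face transformers library. The model choice, label names and reliability are assumptions rather than guarantees; such detectors are known to produce false positives and can be evaded with light editing.

```python
# Illustrative only: scoring a text snippet with an off-the-shelf AI-text detector.
# The model choice and label names are assumptions; detectors like this are
# imperfect and easy to evade with paraphrasing.
from transformers import pipeline

# A commonly cited public detector fine-tuned to flag GPT-2-style output.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

snippet = "The senator announced a sweeping new policy at a rally yesterday."
result = detector(snippet)[0]
print(f"label={result['label']} score={result['score']:.2f}")
```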

Crucially, whether misinformation is produced by AI or by human beings, social media platforms will remain the means by which it spreads.

There are reasons to be optimistic. New tools for detecting AI-generated text and images may emerge as we head into the 2024 election season. But following the longstanding cat-and-mouse dynamic in which technological developments confer political advantages, we will still be dependent on the decisions of a small number of high-reach platforms. Once again, the need for transparency, as we see in other emerging regulations, will be paramount. The EU Digital Services Act provides a model under which a number of platforms are already reporting the actions they take with respect to false content.

Ironically, however, decisions about how to deploy these tools, and what to report, will likely be just as politically charged as the content the tools are designed to identify. Moreover, as the cost of such tools grows, that cost may become yet another barrier to new entrants in the social media space, which is still another way that AI and social media are likely to interact in the future.

Joshua A. Tucker is Senior Geopolitical Risk Advisor at Kroll and Co-Director of the New York University Center for Social Media and Politics.


