[Image: Hands on a computer keyboard with simulated holographic images floating above, representing aspects of artificial intelligence. © Khanchit Khirisutchalual - iStock-1515913422]

AI will play a role in election misinformation. Experts are trying to fight back

Paige Gross

(Colorado Newsline) In June, amid a bitterly contested Republican gubernatorial primary race, a short video began circulating on social media showing Utah Governor Spencer Cox purportedly admitting to fraudulent collection of ballot signatures.

The governor, however, never said any such thing and courts have upheld his election victory.

[Image: Roadside-style sign with the words "Elections Ahead." © iStock - gguy44]

The false video was part of a growing wave of election-related content created by artificial intelligence. At least some of that content, experts say, is false, misleading or simply designed to provoke viewers.

AI-created likenesses, often called “deepfakes,” have increasingly become a point of concern for those battling misinformation during election seasons. Creating deepfakes used to take a team of skilled technologists with time and money, but recent advances in AI technology, and its growing accessibility, mean that nearly anyone can create convincing fake content.

“Now we can supercharge the speed and the frequency and the persuasiveness of existing misinformation and disinformation narratives,” Tim Harper, senior policy analyst for democracy and elections at the Center for Democracy and Technology, said.

AI has advanced remarkably just since the last presidential election in 2020, Harper said, noting that OpenAI’s release of ChatGPT in November 2022 brought accessible AI to the masses.

About half of the world’s population lives in countries that are holding elections this year. And the question isn’t really if AI will play a role in misinformation, Harper said, but rather how much of a role it will play.

How can AI be used to spread misinformation?

Though it is often intentional, misinformation caused by artificial intelligence can sometimes be accidental, due to flaws or blind spots baked into a tool’s algorithm. AI chatbots search for information in the databases they have access to, so if that information is wrong or outdated, the chatbot can easily produce wrong answers.

OpenAI said in May that it would be working to provide more transparency about its AI tools during this election year, and the company endorsed the bipartisan Protect Elections from Deceptive AI Act, which is pending in Congress.

“We want to make sure that our AI systems are built, deployed, and used safely,” the company said in the May announcement. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”

[Image: Utah Governor Spencer Cox]

Poorly regulated AI systems can also lead to misinformation. Several secretaries of state recently called on Elon Musk after Grok, the AI search assistant built for his social media platform X, falsely told users Vice President Kamala Harris was ineligible to appear on the presidential ballot in nine states because the ballot deadline had passed. The false information stayed on the platform, and was seen by millions, for more than a week before it was corrected.

“As tens of millions of voters in the U.S. seek basic information about voting in this major election year, X has the responsibility to ensure all voters using your platform have access to guidance that reflects true and accurate information about their constitutional right to vote,” reads the letter signed by the secretaries of state of Washington, Michigan, Pennsylvania, Minnesota and New Mexico.

Generative AI impersonations also create a new avenue for spreading misinformation. In addition to the fake video of Cox in Utah, a deepfake video of Florida Governor Ron DeSantis falsely showed him dropping out of the 2024 presidential race.

Some misinformation campaigns happen on huge scales like these, but many others are more localized, targeted campaigns. For instance, bad actors may imitate the online presence of a neighborhood political organizer, or send AI-generated text messages to listservs in certain cities. Language minority communities have been harder to reach in the past, Harper said, but generative AI has made it easier to translate messages or target specific groups.

While most adults are aware that AI will play a role in the election, some hyperlocal, personalized campaigns may fly under the radar, Harper said.

For example, someone could use data about local polling places and public phone numbers to create messages specific to you. They may send a text the night before Election Day saying that your polling location has changed from one spot to another, and because they have your original polling place right, the message may not raise a red flag.

“If that message comes to you on WhatsApp or on your phone, it could be much more persuasive than if that message was in a political ad on a social media platform,” Harper said. “People are less familiar with the idea of getting targeted disinformation directly sent to them.”

Verifying digital identities

The deepfake video of Cox helped spur a partnership between a public university and a new tech platform with the goal of combating deepfakes in Utah elections.

From July 2024 through Inauguration Day in January 2025, students and researchers at the Gary R. Herbert Institute for Public Policy and the Center for National Security Studies at Utah Valley University will work with SureMark Digital. Together, they’ll verify digital identities of politicians to study the impact AI-generated content has on elections.

Through the pilot program, candidates seeking one of Utah’s four congressional seats or the open Senate seat will be able to authenticate their digital identities at no cost through SureMark’s platform, with the goal of increasing trust in Utah’s elections.

[Image: © iStock - baramee2554]

Brandon Amacher, director of the Emerging Tech Policy Lab at UVU, said he sees AI playing a similar role in this election as the emergence of social media did in the 2008 election — influential but not yet overwhelming.

“I think what we’re seeing right now is the beginning of a trend which could get significantly more impactful in future elections,” Amacher said.

In the first month of the pilot, Amacher said, the group has already seen how effective these simulated video messages can be, especially in short-form media like TikTok and Instagram Reels. A shorter video is easier to fake, and if someone is scrolling those platforms for an hour, a short clip of misinformation likely won’t get much scrutiny, but it can still shape their opinion of a topic or a person.

SureMark Chairman Scott Stornetta explained that the verification platform, which rolled out in the last month, allows a user to acquire a credential. Once the credential is approved, the platform runs an authorization process over all of the user’s published content, using cryptographic techniques that bind a person’s identity to the content that features them. A browser extension then tells viewers whether a piece of content was published by the credentialed person or by an unauthorized actor.
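
SureMark has not published its implementation details, but the underlying idea, binding a credentialed person’s identity to a piece of content with a digital signature and then checking that binding wherever the content appears, can be sketched briefly. The following is a minimal illustration using Python’s cryptography package; the credential ID and the signing flow are assumptions made for the sake of example, not SureMark’s actual system.

```python
# Minimal sketch of identity-bound content signing and verification.
# The credential_id is hypothetical, standing in for whatever identifier
# a credentialing process like SureMark's would issue.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(private_key: Ed25519PrivateKey, credential_id: str, content: bytes) -> bytes:
    # Bind the person's credential to this specific content by signing
    # the credential ID together with a hash of the content.
    digest = hashlib.sha256(content).digest()
    return private_key.sign(credential_id.encode() + digest)


def verify_content(
    public_key: Ed25519PublicKey, credential_id: str, content: bytes, signature: bytes
) -> bool:
    # Roughly what a verifying browser extension would do: recompute the
    # hash, check the signature, and show a "green check" only if it holds.
    digest = hashlib.sha256(content).digest()
    try:
        public_key.verify(signature, credential_id.encode() + digest)
        return True
    except InvalidSignature:
        return False


# Example: a credentialed candidate signs a video; a viewer verifies it.
key = Ed25519PrivateKey.generate()
video = b"...video bytes..."
sig = sign_content(key, "candidate-001", video)
print(verify_content(key.public_key(), "candidate-001", video, sig))        # True
print(verify_content(key.public_key(), "candidate-001", b"tampered", sig))  # False
```

In practice, the public key would need to be distributed by a trusted credentialing authority so the extension knows which key belongs to which person; that trust step is what the credential-approval process described above supplies.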

The platform was created with public figures in mind, especially politicians and journalists who are vulnerable to having their images replicated. Anyone can download the SureMark browser extension to check content across different media platforms, not just the people who get accredited. Stornetta likened the technology to an X-ray.

“If someone sees a video or an image or listens to a podcast on a regular browser, they won’t know the difference between a real and a fake,” he said. “But if someone that has this X-ray vision sees the same documents in their browser, they can click on a button and basically find out whether it’s a green check or red X.”

The pilot program is still working to credential the state’s politicians, so it will be a few months before results start to come in. But Justin Jones, the executive director of the Herbert Institute, said that every campaign they’ve connected with has been enthusiastic to try the technology.

“All of them have said we’re concerned about this and we want to know more,” Jones said.

What’s the motivation behind misinformation?

Lots of different groups with varying motivations can be behind misinformation campaigns, Michael Kaiser, CEO of Defending Digital Campaigns, told States Newsroom.

There is sometimes misinformation directed at specific candidates, as in the case of the Cox and DeSantis deepfake videos. Campaigns built around geopolitical events, like wars, are also commonly used to sway public opinion.

[Image: Open hand facing up and glowing slightly from the palm with the letters 'AI' floating above. © Shutthiphong Chandaeng - iStock-1452604857]

Russia’s influence on the 2016 and 2020 elections is well documented, and those efforts will likely continue in 2024 with the goal of undermining U.S. support for Ukraine, a Microsoft study recently reported.

There’s sometimes a monetary motivation behind misinformation, Amacher said, as provocative, viral content can turn into payouts on platforms that pay users for views.

Kaiser, whose work focuses on providing cybersecurity tools to campaigns, said that while interfering in elections is sometimes the goal, more commonly bad actors are trying to create a general sense of chaos and apathy toward the election process.

“They’re trying to divide us at another level,” he said. “For some bad actors, the misinformation and disinformation is not about how you vote. It’s just that we’re divided.”

It’s why much of the AI-generated content is inflammatory or plays on your emotions, Kaiser said.

“They’re trying to make you apathetic, trying to make you angry, so maybe you’re like, ‘I can’t believe this, I’m going to share it with my friends,’” he said. “So you become the platform for misinformation and disinformation.”

Strategies for stopping the spread of misinformation

Understanding that emotional response, and the eagerness to share or engage with the content, is a key tool for slowing the spread of misinformation. If you’re in that moment, there are a few things you can do, the experts said.

First, try to find out if an image or sound bite you’re viewing has been reported elsewhere. You can use reverse image search on Google to see if that image appears on reputable sites, or if it’s only being shared by social media accounts that appear to be bots. Websites that fact-check manufactured or altered images may point you to where the information originated, Kaiser said.

If you’re receiving messages about Election Day or voting, double-check the information online through your state’s voting resources, he added.

[Image: Pile of red, white, and blue lapel pins with the word "Vote." © iStock]

Adding two-factor authentication on social media profiles and email accounts can help ward off phishing attacks and hacking, which can be used to spread misinformation, Harper said.

If you get a phone call you suspect may be AI-generated or may be using someone’s voice likeness, it’s a good idea to confirm the caller’s identity by asking about the last time you spoke.

Harper also said there are a few giveaways to look out for in AI-generated images, like an extra finger or a distorted ear or hairline; AI still has a hard time rendering those finer details.

Another visual clue, Amacher said, is that deepfake videos often feature a blank background, because busy surroundings are harder to simulate.

And finally, the closer we get to the election, the more likely you are to see misinformation, Kaiser said. Bad actors use that timing to their advantage: the closer it is to Election Day, the less time misinformation has to be debunked.

Technologists themselves can take on some of the responsibility for curbing misinformation through the way they build AI, Harper said. He recently published a summary of best-practice recommendations for AI developers.

The recommendations include refraining from releasing text-to-speech tools that let users replicate the voices of real people, refraining from generating realistic images and videos of political figures, and prohibiting the use of generative AI tools in political ads.

Harper also suggests that AI developers disclose how often a chatbot’s election-related training data is updated, develop machine-readable watermarks for generated content, and promote authoritative sources of election information.
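
The report does not prescribe a particular watermarking scheme. As a toy illustration of what “machine-readable” means here, the sketch below embeds and reads back a small provenance record in a PNG text chunk using the Pillow library; real deployments rely on sturdier approaches, such as cryptographically signed provenance manifests or pixel-level watermarks, because plain metadata is easy to strip.

```python
# Toy sketch of a machine-readable provenance marker, using Pillow.
# Plain metadata like this is easy to strip; production systems use signed
# manifests or pixel-level watermarks that survive re-encoding.
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    # Write a small JSON provenance record into a PNG text chunk.
    record = json.dumps({"ai_generated": True, "generator": generator})
    meta = PngInfo()
    meta.add_text("provenance", record)
    with Image.open(in_path) as img:
        img.save(out_path, pnginfo=meta)


def read_provenance(path: str) -> dict | None:
    # Read the record back, as a checking tool or platform might.
    with Image.open(path) as img:
        raw = getattr(img, "text", {}).get("provenance")
    return json.loads(raw) if raw else None


# Example usage (file names and generator name are hypothetical):
# tag_ai_generated("generated.png", "generated_tagged.png", "example-model-v1")
# print(read_provenance("generated_tagged.png"))  # {'ai_generated': True, ...}
```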

Some tech companies already voluntarily follow many of these transparency best practices, but much of the country is governed by a “patchwork” of laws that have not kept pace with the technology itself.

A bill prohibiting the use of deceptive AI-generated audio or visual media of a federal candidate was introduced in Congress last year, but it has not been enacted. Laws focusing on AI in elections have been passed at the state level in the last two years, though; they primarily either ban AI-created messaging and images or require specific disclaimers about the use of AI in campaign materials. Colorado candidates must disclose whether they use generative artificial intelligence in their campaign communications under a bill signed into law this year.

But for now, newer tech companies that want to do their part in stopping or slowing the spread of misinformation can seek some direction from the CDT report or pilot programs like UVU’s.

“We wanted to take a stab at creating a kind of comprehensive election integrity program for these companies,” Harper said, “understanding that, unlike the legacy social media companies, they’re very new and quite young and haven’t had the time or the kind of regulatory scrutiny required to create strong election integrity policies in a more systematic way.”


Colorado Newsline is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Colorado Newsline maintains editorial independence. Contact Editor Quentin Young for questions: info@coloradonewsline.com. Follow Colorado Newsline on Facebook and X.