With the 2024 US presidential elections around the corner, California Governor Gavin Newsom recently signed three new bills to tackle the use of artificial intelligence (AI) in creating misleading images and videos for political advertisements. The development comes at a time when there is widespread concern among Americans about AI’s potential to spread misinformation during the upcoming elections.

Earlier this month, singer Taylor Swift, who endorsed US Vice-President Kamala Harris on her Instagram account, wrote about the dangers of AI after fake images of her were circulated falsely depicting her endorsement of Donald Trump.

X’s AI chatbot Grok has also been in the limelight for pushing misinformation about the elections and for allowing users to create lifelike AI-generated images (deepfakes) of elected officials in ethically questionable situations.

A recent Pew Research Center survey found that around 57 per cent of respondents were worried about AI being used to create false information, but only 20 per cent trusted major tech companies to prevent such misuse. This unease is shared by Republicans and Democrats alike, though views on AI’s impact vary by age group.

https://www.pewresearch.org/short-reads/2024/09/19/concern-over-the-impact-of-ai-on-2024-presidential-campaign/

The survey found that over 39 per cent of Americans said AI would be used mostly for negative purposes during the presidential campaign. It also revealed that 57 per cent of US adults – including nearly identical shares of Republicans and Democrats – were extremely or very concerned that people or organisations seeking to influence the election will use AI to create and distribute fake or misleading information about the candidates and campaigns.

Only 20 per cent of respondents in the study said that they were very or somewhat confident that the social media companies would prevent their platforms from being misused.

Another survey by online platform Hosting Advice found that 58 per cent of the surveyed adults had been misled by AI-generated fake news. Seventy per cent of the survey respondents were worried about how fake news might affect the upcoming election.

https://www.hostingadvice.com/studies/ai-generated-fake-news-impact/

For further insights into how AI-driven misinformation could affect the upcoming US elections, The Indian Express spoke to a few AI experts.

‘AI literacy is the key’

Alex Mahadevan, director of MediaWise, a nonpartisan, nonprofit initiative of The Poynter Institute that empowers people with the skills to identify misinformation, says generative AI poses two significant risks during the 2024 US elections.

“First, the fact that anyone can use the paranoia about generative AI to say a real image is synthetic. So you might have a politician say a compromising photo of themselves was actually created through artificial intelligence. This makes it hard for voters to know what to trust. It’s literally getting near impossible to believe your eyes online. Second, it’s the ability for anyone to become a one-person troll farm. They can use generative AI to churn out tons of political propaganda and memes, text, images or audio to support their preferred candidate or denigrate an opponent,” he says.

How can political campaigns and advocacy groups counteract AI misinformation to protect electoral integrity? AI literacy is the key, says Mahadevan, who is also on the faculty at Poynter, where he provides training in fact-checking, media literacy and journalism ethics.

“Trying to make sure the public is educated on what generative AI tools are capable of and what they are not capable of… teaching audiences how to do things like a reverse image search so they can determine the provenance of an image or video is how political campaigns and advocacy groups can counteract the spread of generative AI. I think governing bodies should at the very least demand transparency about algorithms behind these AI tools,” he adds.

‘Detect and flag fakes immediately’

Eliot Higgins, director of Bellingcat Productions BV, an independent investigative collective of researchers, investigators and citizen journalists, says that one of the biggest risks posed by generative AI is the creation of deepfakes.

“These are fake videos or audio clips that look and sound incredibly real, showing politicians saying or doing things they never actually did. It’s kind of scary how convincing they can be, and they have the potential to seriously mislead voters. Plus, AI can churn out loads of fake news articles and social media posts in no time, making it easier to spread misinformation far and wide, something we have seen on various fake news sites used to spread false stories, in the last year in particular. All this can really skew voter perception because people might base their opinions on things that are not true,” he says.

“On how campaigns and advocacy groups can fight back, I think a multi-pronged approach is best. They could invest in tech that helps detect and flag AI-generated fakes early on. Educating the public is huge too. If more people know about deepfakes and how to spot them, it will lessen their impact. Having teams ready to quickly address and debunk false information can make a big difference as well. Collaborating with social media platforms to remove harmful content quickly is also key. And by being transparent and encouraging supporters to fact-check information, they can build more trust,” Higgins adds.

Regulatory bodies also have a part to play, says Higgins. “Setting clear guidelines on how AI can be used in political advertising would help. For example, requiring that any AI-generated content is clearly labelled so people know what they are seeing. Holding people accountable if they intentionally spread misinformation is important too, as it could deter bad actors. Working with tech companies to improve detection methods would be beneficial, and updating laws to keep pace with technology will help ensure regulators are prepared to handle new challenges,” he adds.
