How deepfake AI 'nightmare' could manipulate voters before presidential election

Imagine this: It’s election eve, Nov. 4, and a disturbing video is circulating on all the major social media platforms (X, Instagram, TikTok and Facebook). It’s viral, and there’s nothing anyone can do about it.

In this scenario, the video depicts a presidential candidate disparaging minorities, women, the handicapped and people who live in the South. It’s controversial, lewd, shocking, and it’s not real.

Still, the deepfake could swing the election in favor of the other candidate.

Robert Weissman, president of Public Citizen, a nonprofit organization advocating for consumer rights, described this possibility as a "nightmare scenario" during a recent episode of FOX 5’s On The Hill.

"They can’t really refute it because people saw them doing this thing that, in fact, they didn’t do, and it swings an election," Wiseman said.


"The technology is getting better. Almost day by day, the audio technology is near perfect. If you heard yourself, you probably couldn't tell if it was a deepfake unless you played it slow," he added.

The scenario is not out of the realm of possibility. Something similar already happened in the 2024 election cycle: in July, tech billionaire Elon Musk shared a video on his platform X that used an AI voice-cloning tool to imitate Vice President Kamala Harris and falsely attribute statements to her.

Musk initially shared the video without clarifying that it was a parody, and it gained significant attention. He eventually acknowledged its satirical intent, pinning the original creator’s post to his profile and asserting that parody is not a crime.

The video mimics many visuals from a real Harris campaign ad but replaces her voice-over with an AI-generated version. 

The AI voice falsely states, "I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate," labels Harris a "diversity hire" because of her gender and race, and claims she lacks the knowledge to run the country.

The video maintains "Harris for President" branding and incorporates some genuine past clips of Harris.


Cellphone apps like Deepswap.ai, Deepfake web and Reface make it easy to create deepfakes.

Twenty states currently have laws making this illegal: if you’re going to make a deepfake, you have to label it so everyone knows exactly what they’re seeing.

Starting in 2019, several states passed legislation to address the use of deepfakes, specifically targeting deceptive, manipulated audio or visual images created with malice and without consent. These laws are not limited to AI-generated content but cover manipulated media more broadly.

Additionally, some laws have been expanded to protect against the distribution of deceptive media intended to harm a candidate's reputation or deceive voters.

Currently, at least 40 states have pending legislation on the matter. 

Alabama’s HB 172 criminalizes creating and distributing private images without consent, while Florida’s HB 919, "Artificial Intelligence Use in Political Advertising," requires disclaimers on political advertisements that use AI. The legislation reflects growing concern over the influence of deepfake technology on elections and the importance of transparency in protecting the integrity of the democratic process.

While states like Michigan have introduced bills to criminalize the use of materially deceptive media in elections, Pennsylvania’s similar efforts have stalled, and West Virginia’s HB 4963, which would prohibit the use of deepfake technology to influence an election, failed to pass the state Senate.

Weissman says there has been little movement at the federal level, which is why his organization, Public Citizen, has petitioned the Federal Election Commission to issue a rule on the matter. But he believes the commission is "slow-walking the issue" and doubts it will act before the election.

In 2016 and 2020, the United States saw overseas interests use misinformation campaigns to influence voters, and Weissman considers deepfakes from abroad a real threat to elections. Still, he worries more about what happens at home.

"I actually think the bigger threat is domestic because if this stuff is legal, we should assume that political operatives of all political stripes will use whatever tools are available, regardless of how ethical or unethical they may be," he said. 

"I think if someone is going to make a deepfake video or audio that shows someone doing or saying something that they didn’t do, it’s actually a fraud on the voters," Wiseman added. "There is no First Amendment protection for fraud, and there’s every right for government, at both the state and federal level, to regulate it."

He noted that while deepfakes have so far had little impact on U.S. elections, the phenomenon has already affected outcomes in elections around the world.

"It’s chilling stuff," Wiseman said.

YouTube, Meta, Midjourney, and OpenAI strengthen election safeguards with new policies

A man is seen using OpenAI's ChatGPT chat website. (Credit: Jaap Arriens/NurPhoto via Getty Images)

Social media companies and AI creation platforms have taken steps to safeguard elections by implementing policies that promote transparency, accuracy and accountability. These measures aim to prevent misinformation and ensure a fair democratic process.

Midjourney

Midjourney, an AI image generator, prohibits users from creating images for political campaigns or attempting to influence elections. It also bans content that is aggressive, hateful, or deceptive. Violations result in users being banned from the platform. Additionally, Midjourney advises caution when sharing AI-generated content, encouraging users to consider how others might perceive their creations.

YouTube

With its vast audience, YouTube also has strict policies against misinformation related to elections. The platform prohibits content that misleads voters or falsely claims widespread fraud. YouTube CEO Neal Mohan recently announced AI features for YouTube Shorts, which include tools to create six-second video clips. Mohan emphasized the platform’s commitment to election integrity, with policies in place to prevent manipulated content and misinformation that could cause harm. Users are encouraged to report any content that violates these guidelines.

OpenAI

Known for its powerful generative AI tools like ChatGPT and DALL-E, OpenAI has implemented similar safeguards. The company prohibits the use of its tools for political campaigning and lobbying. It also works to prevent misleading deepfakes and impersonations of candidates. OpenAI has developed tools to improve factual accuracy and transparency, such as efforts to detect AI-generated images and ensure voters can trust the origin of the content they encounter.

Meta

Meta, which operates Facebook, Instagram, and Threads, has also taken proactive measures. The company labels AI-generated content with tags like "Imagined with AI" and is working on a feature to detect AI-generated images from other platforms. This is part of Meta’s effort to provide transparency during major elections, ensuring that users can differentiate between real and AI-created content. The platform also penalizes users who fail to disclose when they post AI-generated video or audio that mimics reality.
