TLDR
- Elon Musk shared a manipulated AI video of Vice President Kamala Harris on X (formerly Twitter)
- The video used AI to mimic Harris’s voice, putting words in her mouth that she never said
- Musk initially shared the video without labeling it as parody or AI-generated
- This raised concerns about AI’s potential to mislead voters ahead of the U.S. election
- The incident highlights the lack of clear regulations for AI use in political content
A manipulated video of Vice President Kamala Harris, shared by Elon Musk on his social media platform X, has ignited a debate about the use of artificial intelligence (AI) in politics. The incident raises concerns about the potential for AI to mislead voters as the United States approaches its presidential election.
On July 26, 2024, Musk shared a video that used AI to mimic Harris’s voice, attributing statements to her that she never made. The video, originally created as a parody, used visuals from a real Harris campaign ad but replaced the audio with an AI-generated voice impersonating the vice president.
“This is amazing 😂 pic.twitter.com/KpnBKGUUwn” (Elon Musk, @elonmusk, July 26, 2024)
In the manipulated video, the fake Harris voice claims she is running for president because Joe Biden “finally exposed his senility.” It also refers to her as a “diversity hire” and states she doesn’t know “the first thing about running the country.”
Musk initially shared the video with the caption “This is amazing” and a laughing emoji, without explicitly noting it was a parody or AI-generated. His post gained over 123 million views within days, according to X’s metrics.
The incident has sparked criticism from various quarters. Mia Ehrenberg, a Harris campaign spokesperson, stated: “We believe the American people want the real freedom, opportunity and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump.”
Experts in AI-generated media have confirmed that much of the audio in the fake ad was created using AI technology. Hany Farid, a digital forensics expert at the University of California, Berkeley, noted, “The AI-generated voice is very good. Even though most people won’t believe it is VP Harris’ voice, the video is that much more powerful when the words are in her voice.”
The incident has highlighted the potential dangers of AI in political discourse. Rob Weissman, co-president of the advocacy group Public Citizen, expressed concern that many people might be fooled by such videos. “I don’t think that’s obviously a joke,” Weissman said. “I’m certain that most people looking at it don’t assume it’s a joke. The quality isn’t great, but it’s good enough.”
The sharing of this video has also raised questions about X’s content policies. The platform’s rules prohibit sharing “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.” However, there is an exception for memes and satire, as long as they do not cause “significant confusion about the authenticity of the media.”
This event underscores the lack of comprehensive federal regulations on AI use in political content. While more than one-third of U.S. states have created laws regulating AI in campaigns and elections, according to the National Conference of State Legislatures, there is no overarching federal legislation.
In response to the incident, California Governor Gavin Newsom announced plans to sign a bill in the coming weeks to make such manipulations illegal. “Manipulating a voice in an ‘ad’ like this one should be illegal,” Newsom stated on X.