- Elon Musk is defending himself for reposting a deepfake of Kamala Harris, writing "Suggon Deeznutz."
- The video, a parody of Harris' campaign ad, appears to have been digitally altered.
- Musk's repost lacked context, potentially breaching X's rules on synthetic and manipulated media.
After reposting a deepfake video of Vice President Kamala Harris on X, Elon Musk is defending himself with a not-safe-for-work clapback.
The video, first reshared by Musk on Friday evening, features an apparently AI-generated imitation of Harris' voice playing over the visuals of her real campaign ad. In it, the fake Harris calls herself the "ultimate diversity hire" and a "deep state puppet."
In response to California Gov. Gavin Newsom writing on X that it should be illegal to digitally alter ads like this, Musk posted early Monday morning, "I checked with renowned world authority, Professor Suggon Deeznutz, and he said parody is legal in America."
Musk doubled down in two follow-up comments, writing, "Not to mention Pullitsir Prize winner Dr Head, first name Dick" and "Newsom should create an endowed 😉 chair at Univ of California for Prof Deeznuts."
The video was originally posted by the user @MrReaganUSA, who noted that the clip was a "parody" of Harris' first campaign ad since becoming the presumptive Democratic Party nominee for the 2024 presidential election.
In the video, the edited voice-over says, "I was selected because I am the ultimate diversity hire. I'm both a woman and a person of color, so if you criticize anything I say, you're both sexist and racist."
The deceptive voice-over also calls President Joe Biden senile and says Harris does "not know the first thing about running the country."
In his repost of the clip, which has been viewed more than 129 million times, Musk failed to note that the video had been edited, writing only: "This is amazing 😂."
That repost may run afoul of X's policy on synthetic and manipulated media, which states: "You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm ('misleading media')."
X says that for the company to take action and remove or label a post under that policy, the post must "include media that is significantly and deceptively altered," be "shared in a deceptive manner or with false context," or be likely to cause "widespread confusion on public issues."
The company says that it will consider factors including "whether there are any visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) that has been added, edited, or removed that fundamentally changes the understanding, meaning, or context of the media."
So far there's been no sign that X — the platform Musk now owns — is going to punish him.
"This is a violation of @X's policies on synthetic media & misleading identities. Are you going to retroactively change them to allow violations in an election year? cc @lindayaX @nickpickles" — Alex Howard (@digiphile), July 27, 2024
The deepfake boom
Deepfakes use artificial intelligence to replace a person's likeness with that of someone else in video or audio footage.
Audio deepfakes are relatively simple to create but are difficult to detect, studies have found.
A number of politicians have already fallen victim to the technology, highlighting its potential to wreak havoc around election time.
In one clip circulating on social media last year, Hillary Clinton appeared to give a surprise endorsement of Florida Gov. Ron DeSantis. The clip, however, was revealed to have been AI-generated, Reuters reported.
Biden was also on the receiving end of a deepfake following his announcement that he was dropping out of the 2024 presidential election race.
A video on social media appeared to show the president lashing out at his critics and cursing them. That footage, too, was a deepfake, per the AFP news agency.