A fake video of Ukrainian President Volodymyr Zelenskyy calling on his soldiers to lay down their weapons spread across Meta platforms last year.
  • Generative AI chatbots are spreading dangerous disinformation, a new report found.
  • NewsGuard audited 10 AI chatbots and found they spread Russian propaganda one-third of the time.
  • AI's apparent propensity for disinformation is concerning ahead of the 2024 election. 

Your favorite AI chatbot likely isn't immune to Russian propaganda, a new report found.

The world's leading generative AI chatbots are fueling disinformation and citing Moscow-funded fake news sources as fact more than 30% of the time, according to a recent NewsGuard audit.

The report's findings come ahead of the 2024 election, as the risk of disinformation and its influence intensifies both in the US and abroad.

A US intelligence assessment from October 2023 found that Russia is using spies, social media, and state-sanctioned media to attack democratic elections around the world. The assessment specifically cited the success of Russia's propaganda operations ahead of the 2020 US election.

OpenAI's models have already been used by foreign influence campaigns, according to a recent OpenAI report.

The NewsGuard report, first covered by Axios, found that AI chatbots are spreading false narratives tied to American fugitive John Mark Dougan, who has been linked to a network of Russian propaganda websites that, at first glance, appear to be local news outlets.

Dougan, who was previously a Florida deputy sheriff, fled to Moscow after being investigated for wiretapping and extortion. Mainstream media outlets, including The New York Times, have covered Dougan and his disinformation empire extensively, producing reporting that AI chatbots should easily be able to access online.

NewsGuard tested 10 AI chatbots: OpenAI's ChatGPT-4, You.com's Smart Assistant, xAI's Grok, Inflection's Pi, Mistral's le Chat, Microsoft's Copilot, Meta AI, Anthropic's Claude, Google's Gemini, and Perplexity's answer engine.

A spokesperson for Google said the company is working "constantly" to improve Gemini's responses and prevent it from generating harmful content.

"Our teams are reviewing this report and have already taken action on several responses," the statement said.

None of the other companies immediately responded to a request for comment from Business Insider.

NewsGuard issued 570 prompts in total, 57 to each chatbot. The prompts were based on 19 popular disinformation narratives, including lies about Ukrainian President Volodymyr Zelenskyy, according to the report.

The audit tested each narrative in three different ways: prompting the chatbot in a "neutral" manner, asking the model a "leading question," and posing a "malign actor" prompt designed to elicit disinformation. Those 19 narratives, each tested three ways, account for the 57 prompts per chatbot.

Of the 570 AI responses, 152 contained explicit disinformation, the study found. Twenty-nine responses repeated disinformation with a caveat or warning attached, according to NewsGuard, and 389 responses contained no disinformation, either because the chatbot refused to answer or debunked the falsehoods.
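The arithmetic behind those figures is easy to check. Here is a minimal Python sketch, our own illustration rather than NewsGuard's methodology, that reproduces the prompt counts and percentages from the report's published totals:

```python
# Minimal sketch reproducing the arithmetic in NewsGuard's audit.
# All counts come from the published report; the percentage
# calculations are our own illustration.

NARRATIVES = 19   # disinformation narratives tested
FRAMINGS = 3      # "neutral", "leading question", "malign actor"
CHATBOTS = 10     # models audited

prompts_per_bot = NARRATIVES * FRAMINGS        # 57
total_responses = prompts_per_bot * CHATBOTS   # 570

outcomes = {
    "explicit disinformation": 152,
    "disinformation with a caveat or warning": 29,
    "no disinformation (refusal or debunk)": 389,
}
assert sum(outcomes.values()) == total_responses

for label, count in outcomes.items():
    print(f"{label}: {count} ({count / total_responses:.1%})")

# Responses that repeated the false narratives in any form:
repeated = (outcomes["explicit disinformation"]
            + outcomes["disinformation with a caveat or warning"])
print(f"repeated disinformation: {repeated} ({repeated / total_responses:.1%})")
# -> 181 (31.8%), the figure the report rounds to "nearly one-third"
```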

The bots "convincingly repeated" fabricated narratives and false facts linked to Russian propaganda outlets nearly one-third of the time — a concerning statistic, especially as more and more people turn to AI models to get their information and answers.

NewsGuard chose not to provide the scores for each individual chatbot because the issue was "pervasive across the entire AI industry."

Business Insider's Adam Rogers has written about generative AI's propensity for lying, dubbing ChatGPT a "robot con artist." Technology researchers told BI earlier this year that malicious actors could tamper with generative AI datasets for as little as $60.

Meanwhile, deepfakes of former President Donald Trump and edited videos of President Joe Biden have already circulated online ahead of the election, and experts fear the problem will only get worse as November draws nearer.

Several new startups, however, are attempting to fight AI-based misinformation, creating deepfake detection and content moderation tools.
