[Image: AI-generated illustration of a rat sitting up on its hind legs, with a cutout of its abdomen and a giant, veiny, phallic appendage extending from the stomach upward beyond the frame, alongside three breakout illustrations.]
This AI-generated figure from a scientific paper seems... off. (Green annotations were added by Business Insider.)
  • An AI-generated image of a rat with a towering phallic appendage went semi-viral last month.
  • The nonsense diagram appeared in a now-retracted scientific paper, published in a Frontiers journal.
  • This rat is a symptom of a crisis of fakes in the career-driven business of research publishing.

This rat has an enormous "dck," and it's a symptom of a bigger problem.

You don't need to be a scientist to know that rats don't have bulbous, sky-high penises, or that words like "testtomcels," "retat," and "dissilced" are total gibberish.

And yet, the bogus diagram below appeared in a paper published last month by the scientific journal Frontiers in Cell and Developmental Biology.

[Image: the same AI-generated rat figure, with its gibberish labels and outsized appendage.]
Ever seen a rat like this before?

To its credit, the journal quickly retracted the paper. But its AI-generated images had already gone viral in online science communities. They even got their own page on Know Your Meme.

[Image: a diagram of labeled bubbles connected by squiggly lines, all captioned in gibberish.]
Are those hieroglyphics? (Red arrows were added by Business Insider.) Another diagram with bogus content published alongside the rat image.

But this rat's towering phallus is just one symptom of a crisis of fake science.

"If it's the first time you've seen a really weird paper get published, I can see why it would capture your attention," Ivan Oransky, a co-cofounder of the watchdog journalism site Retraction Watch, told Business Insider. But for him, he said, "it's all sort of mind-numbingly routine at this point."

How bad science and weird AI get through the 'Swiss cheese' of peer review

[Image: a white mouse peeking through a hole in a slice of Swiss cheese.]
Each piece of Swiss cheese has some holes... that an AI-generated rat might be able to squeeze through.

Frontiers is an influential, open-access publisher with a peer-review process. So how did this paper make it to publication?

When a publisher like Frontiers receives a scientist's manuscript, the paper passes through the critical eyes of a series of peer reviewers who are experts in the subject matter, as well as editors who assess the peer reviews. Usually, study authors must make changes based on the reviewers' feedback before publication.

Think of the peer-review process as a stack of Swiss cheese slices. Each slice has holes that bad science could slip through, but the overlapping slices tend to cover one another's holes, making it difficult for a bad paper to make it all the way through.

Still, bad science does make it through sometimes, and over the years more holes have opened up. Scientists can now buy made-up papers from paper mills.

There's even precedent for AI slop in science publishing. In 2014, publishers Springer and IEEE retracted more than 120 articles that were gibberish generated by computers. The publishing giant Springer Nature retracted 44 gibberish papers in 2021.

Then there are more traditional forms of scientific fraud — bribing journal editors, falsifying data, or manipulating real images or data.

These bad practices can have real consequences. Early studies that reported ivermectin and hydroxychloroquine as promising COVID-19 treatments were later retracted over signs of fraud, but by then the word was already out, and a wave of ill-informed self-treatment ensued, Vox reported. Even beyond COVID, fabricated studies can end up in databases used for drug research, The Guardian reported.

The mysterious case of the 'retat' 'dck'

In the case of the rat with "testtomcels," Frontiers says that one of the peer reviewers raised concerns about the images and requested that the paper authors revise them.

"The article slipped through the author compliance checks that normally ensures every reviewer comment is addressed," Fred Fenter, chief executive editor of Frontiers, said in an additional statement emailed to Business Insider, calling it a "human error."

He said that Frontiers has added "new checks to catch this form of misconduct," revised its AI policy to be clear about what's not allowed, and is developing "AI to detect AI-generated content and images."

"Those bad faith actors using AI improperly in science will get better and better and so we will have to get better and better too. This is analogous to cybersecurity constantly improving to block new tricks of hackers," Fenter said.

In January, Frontiers announced plans to lay off 30% of its staff, cutting 600 jobs.

"Quality is our highest priority, and the recent restructuring does not affect the peer review process and/or author compliance checks," Fenter said.

The retracted paper's corresponding author, Dingjun Hao, did not respond to Business Insider's request for comment.

Why some scientists publish bad papers

Journals are businesses, and scientists have careers. Both are under intense pressure to publish often.

Most hiring and tenure committees, Oransky says, evaluate researchers based on how many papers they've published, whether they've been published in prestige journals, and how much other scientists cite their work.

"People are desperate to publish and will do anything they have to do in order to publish and keep their jobs or get promoted," Oransky said. "That's the real problem here."

Last year, research journals retracted over 10,000 scientific papers, more than ever before, according to a report in the journal Nature.

Retractions aren't all bad. In fact, they're necessary for the times when peer review fails to catch data errors or irresponsible practices.

But the record retraction rate comes alongside a rise in sham papers that some scientists hastily fabricate or generate with the help of AI.

"It's salacious," Oransky said of the rat and its "dck." But, he continued, "there's sort of nothing new under the sun."

To Oransky, the solution is obvious. Science institutions across the planet should evaluate scientists based on the quality, not the quantity, of their work. His suggested evaluation metric? Show three good papers.

"What we need to do is stop using publications and citations as the metric of everything," he said. "All of that's game-able. Three good papers is not game-able."
