- An Asian MIT student was shocked when an AI tool turned her white for a professional headshot.
- She said the results have put her off AI image tools for now, since they didn't produce anything usable.
- A recent study showed that some AI image generators have issues with gender and racial bias.
An MIT graduate was caught by surprise when she prompted an AI image generator to create a professional headshot for her LinkedIn profile, and it instead changed her race.
Rona Wang, a 24-year-old Asian American student who studied math and computer science and is starting a graduate program at MIT in the fall, had been experimenting with the online AI image creator Playground AI. Insider verified her identity. The Boston Globe first reported the news.
Wang tweeted images of the results on July 14, writing: "Was trying to get a linkedin profile photo with AI editing & this is what it gave me."
The first image, the photo she uploaded to the image generator, shows Wang wearing a red MIT sweatshirt. Her prompt read: "Give the girl from the original photo a professional linkedin profile photo."
The second image showed that the AI tool had altered her features to appear more Caucasian with lighter skin and blue eyes.
"My initial reaction upon seeing the result was amusement," Wang told Insider. "However, I'm glad to see that this has catalyzed a larger conversation around AI bias and who is or isn't included in this new wave of technology."
She added that "racial bias is a recurring issue in AI tools," and the results have put her off them. "I haven't gotten any usable results from AI photo generators or editors yet, so I'll have to go without a new LinkedIn profile photo for now!"
Wang told The Globe that she worried about the consequences in a higher-stakes situation, such as a company using AI to select the most "professional" candidate for a job and picking white-looking people.
"I definitely think it's a problem," Wang said. "I hope people who are making software are aware of these biases and thinking about ways to mitigate them."
Playground AI founder Suhail Doshi responded to Wang's Twitter post saying: "The models aren't instructable like that so it'll pick any generic thing based on the prompt. Unfortunately, they're not smart enough."
He added, "Fwiw, we're quite displeased with this and hope to solve it."
A recent study by researchers at the AI firm Hugging Face found that AI image generators like DALL-E 2 exhibit gender and racial bias.
The study showed that when DALL-E 2 was prompted to generate images of positions of power like "director" or "CEO," it produced images of white men 97% of the time.
The researchers explained that this happens because such tools are trained on biased data, which can amplify stereotypes.
Insider reached out to Playground AI and its founder for comment but didn't immediately hear back.