- OpenAI's ChatGPT showed racial bias when screening resumes for jobs, a Bloomberg investigation found.
- The AI tool over-selected candidates with names associated with Asian women and under-selected those associated with Black men, Bloomberg reported.
- AI has been touted as a way to reduce bias in hiring.
If you're looking for a job, beware: The AI tool screening your resume might have a racial bias similar to some human recruiters.
OpenAI's ChatGPT has become a popular tool for recruiters as hiring ramps up at the start of 2024, Bloomberg reported on Friday. But the news outlet's investigation found that the tool seemed to favor names generally associated with Asian applicants over those associated with Black job seekers.
Bloomberg created resumes for fictitious candidates, then asked ChatGPT, running the GPT-3.5 model, to rank them for several job listings, from "financial engineer" to "HR specialist."
After running the experiment 1,000 times, Bloomberg found that resumes using names associated with Asian women were ranked as the top candidate 17.2% of the time — the highest percentage of the eight race- and gender-based groups the investigation tested.
Resumes with names associated with Black men were ranked first just 7.6% of the time, according to the results.
"If GPT treated all the resumes equally, each of the eight demographic groups would be ranked as the top candidate one-eighth (12.5%) of the time," Bloomberg reported.
OpenAI did not immediately respond to Business Insider's request for comment on the report.
OpenAI told Bloomberg that businesses using ChatGPT to screen resumes can adjust the tool's behavior and add their own safeguards against bias, such as stripping names out of the screening process.
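Name-stripping is straightforward when the applicant's name arrives as a structured field alongside the resume. The snippet below is a hypothetical illustration of that safeguard, not OpenAI's or any recruiter's actual pipeline, and it is only a partial fix: other resume details, such as addresses or affinity groups, can still signal demographics.

```python
import re

def redact_name(resume_text: str, applicant_name: str) -> str:
    """Replace the applicant's name (taken from a structured
    application field) before the resume reaches a screening model."""
    pattern = re.compile(re.escape(applicant_name), re.IGNORECASE)
    return pattern.sub("[CANDIDATE]", resume_text)

resume = "Jane Doe\nFinancial engineer with six years of experience."
print(redact_name(resume, "Jane Doe"))
# [CANDIDATE]
# Financial engineer with six years of experience.
```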
Hiring processes usually involve more than screening names and resumes, the report notes. But if a company relied on an AI tool alone, applicants of certain genders or races "would be adversely impacted," the report says, citing the federal government's standard for hiring discrimination.
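That federal standard is commonly operationalized as the EEOC's "four-fifths" rule: if any group's selection rate falls below 80% of the highest group's rate, the process is flagged for possible adverse impact. Applying that arithmetic to Bloomberg's headline numbers shows how far apart the extremes sit:

```python
# Adverse-impact check under the EEOC "four-fifths" rule, applied to
# Bloomberg's top-candidate rates for the two extreme groups.
rates = {"Asian women": 0.172, "Black men": 0.076}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    verdict = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group:12s} rate {rate:.1%}  ratio {ratio:.2f}  -> {verdict}")
```

Black men's 7.6% works out to roughly 0.44 of Asian women's 17.2%, well below the 0.8 threshold the guideline uses.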
BI previously reported that some experts hope AI can reduce bias in hiring — though they caution that the technology should be just one of the tools human recruiters use to fill positions.
But bias is a big challenge for AI, potentially an even larger one than the much-discussed risk that the technology will become sentient.
For example, researchers found last year that AI-generated faces of white people were more convincing (more likely to be mistaken for real) than AI-generated faces of people of color.
One explanation: AI models are trained more on white faces than those of other racial groups.
Joy Buolamwini, a computer scientist and founder of the Algorithmic Justice League, has found that AI can harm and discriminate against marginalized communities, including through biased facial-recognition systems.