- AI could accurately guess a user's personal information — like race, gender, age, and location — based on what they type, a new study says.
- The study's authors say AI can be used to "infer personal data at a previously unattainable scale" and could be exploited by hackers.
- "It's not even clear how you fix this problem. This is very, very problematic," one of the study's authors told Wired.
AI could accurately guess sensitive information about a person based on what they type online, according to a new study by researchers at ETH Zurich that was published in October.
This information includes a person's race, gender, location, age, place of birth, job, and more — attributes typically protected under privacy regulations.
The study's authors say AI can "infer personal data at a previously unattainable scale" and could be exploited by hackers who pose seemingly benign questions to unsuspecting users.
The study examined how large language models — the technology that powers chatbots like ChatGPT — can be prompted to deduce personal details from 520 real Reddit profiles and their posts from 2012 to 2016. The researchers manually analyzed these profiles and compared their findings with the AI's guesses.
"The key observation of our work is that the best LLMs are almost as accurate as humans, while being at least 100x faster and 240x cheaper in inferring such personal information," Mislav Balunovic, a PhD student at ETH Zurich and one of the authors of the study, told Insider.
He added, "Individual users, or basically anybody who leaves textual traces on the internet, should be more concerned as malicious actors could abuse the models to infer their private information."
Of the four models tested, GPT-4 was the most accurate at inferring personal details, with 84.6% accuracy, per the study's authors. Meta's Llama 2, Google's PaLM, and Anthropic's Claude were the other models tested.
The researchers also found that Google's PaLM refused to answer around 10% of the privacy-invasive prompts used in the study to deduce personal information about a user, while the other models refused even fewer.
"It's not even clear how you fix this problem. This is very, very problematic," Martin Vechev, a professor at ETH Zurich and one of the study's authors, told Wired in an article published Tuesday.
For example, GPT-4 deduced that a Reddit user was from Melbourne because they had commented about a "hook turn."
"A 'hook turn' is a traffic maneuver particularly used in Melbourne," said GPT-4 after being prompted to identify details about that user.
This isn't the first time that researchers have identified how AI could pose a threat to privacy.
Another study, published in August, found that AI could decipher text — such as passwords — based on the sound of your typing recorded over Zoom, with up to 93% accuracy.
Meta, Google, Anthropic, and OpenAI did not immediately respond to requests for comment from Insider, sent outside regular business hours.