Ali Alkhatib, an independent AI-ethics researcher
  • Ali Alkhatib, an AI-ethics researcher, says large AI systems shouldn't be expected to work for everything.
  • Companies make grand claims about what their models can do, which can cause significant harm.
  • The nonconsensual power dynamics under which big AI is produced risk degrading the internet.

AI ethicists are exhausted. The meteoric rise of companies such as OpenAI, Google DeepMind, and countless startups is giving a headache to the people who study and set standards for the technology. 

Researchers are spending more time critiquing artificial-intelligence systems for their grandiose claims and unacknowledged harms, which leaves them less time to develop more-thoughtful technology.

"The space rewards unreasonable claims about what you can do. To make those claims, you have to be pretty willing to be transgressive," said Ali Alkhatib, an independent AI-ethics researcher and former interim director of the University of San Francisco's Data Institute. "The larger you make an algorithmic system, the more likely it is to make recommendations that are beyond the realm of its expertise."

AI should not work for everybody and everything, Alkhatib said, but should instead be applied to the specific tasks and contexts for which it has been trained.

"It's frustrating, but people who are training algorithmic classification or generative systems that are appropriate for the cases that they're being deployed are inherently going to be narrowly constrained and narrowly bounded," he said. "They're just not going to look like big-picture things like what OpenAI is doing. Because inherently, what OpenAI is doing is sort of unreasonable, which is a challenging thing for them to acknowledge or face."

OpenAI did not respond to requests for comment. 

AI offerings from large companies often work by ingesting large amounts of data on the internet to train models, which makes it difficult, if not impossible, for internet users to consent to their information being used in this way, Alkhatib said. 

The companies that deploy AI models and services can also deflect blame by describing their offerings in ways that absolve them of responsibility, he added. For instance, companies may say their AI systems are "sentient" or reaching artificial general intelligence, the ability of a technology to achieve human-level understanding and capability across different tasks and topics.

"One thing I'm worried about in the discourse about artificial general intelligence is the sense that people are not trying to make a broader sense of consciousness," Alkhatib said. "Companies fund researchers to talk about this, and the people who talk about this on their own seem interested in developing plausible deniability for accountability or responsibility."

He added: "By saying that ChatGPT is semisentient or something like that, it shifts responsibility for the harms that it causes away from the company that put it out there in the world."

Alkhatib pointed to the startup Hugging Face as an example of a more-responsible AI company. Still, given Big Tech's AI resources, it's difficult to imagine a world in which the work of Alkhatib and his fellow ethicists gets easier anytime soon.

"I saw a conspicuous trend of people in the AI-fairness and -ethics space tweeting, 'I'm totally burnt out. I don't know how I can do this for the next several years,'" Alkhatib said. "I'm glad that people are talking about it, but all of us are going through it."
