Some Natural Language Processing experts agree that AI has the potential to transform society, and even trigger nuclear war, according to a survey.
  • Researchers surveyed 480 experts in Natural Language Processing on what they think of AI. 
  • Many respondents agreed that AI can create "revolutionary societal change" and lead to catastrophe.
  • The survey was conducted before the release of buzzy AI chatbot ChatGPT.

New research suggests that many machine learning experts are concerned about AI's impact on the world. 

A team of researchers from New York University and other schools surveyed academics, industry professionals, and public sector workers in the field of Natural Language Processing last May to assess their opinions on "controversial issues" such as artificial general intelligence — AGI for short — the influence of business on the field, and ethical concerns.

Respondents received a list of statements on various buzzy discussion topics, including the current state of AI and the major risks that come with it, and were asked to answer "agree" or "disagree" to each.

On the topic of AGI — the hypothetical ability of AI to perform the full range of complex human cognitive tasks — and its risks, two beliefs were proposed: AI could soon lead to "revolutionary societal change," and AI decisions could cause "nuclear-level catastrophe."

Out of 480 respondents, 73% agreed that AI automation of the workforce could result in "revolutionary societal change" during this century, according to the study. Respondents who agreed with this statement believed its impact could, at the very least, mirror the scale of the Industrial Revolution, the era that produced tools like the steam engine and telegraph.

Even more striking, 36% of respondents agreed that AI has the power to cause "catastrophic outcomes" at the level of an "all-out nuclear war." Of those who agreed, 53% identified as underrepresented minorities, suggesting concerns based on the "present-day track record of disproportionate harms to these groups," according to the researchers.

While the survey didn't ask what respondents considered a catastrophic outcome, the researchers said they may have been thinking of known failure modes in machine learning systems, such as a cleaning robot that knocks over vases and damages walls, or of AI acting in unexpected ways and causing a large-scale infrastructural breakdown of the kind most likely to occur during a political conflict or natural disaster.

AI like ChatGPT has sparked debate over how the tech will transform society

If respondents were to retake the survey now, they might give different answers, as these findings came months before OpenAI launched its buzzy conversational chatbot ChatGPT to the public. The chatbot, which reached 100 million monthly active users in January, per Reuters, has reignited debate over whether AI will replace jobs.

The chatbot has demonstrated that it can write articles and analyze large amounts of data quickly — capabilities that experts say could replace content creators and financial analysts — though its responses can also include bias and misinformation.

As Big Tech companies like Microsoft and Google have begun rolling out their own AI chatbots, users have expressed even more concerns over the threats of AI. Many early users of Microsoft's new Bing search engine, for example, found its responses so disturbing that Microsoft limited its chat functions a week after release; those limits have since been loosened.

Like the experts surveyed, many business leaders agree that AI has the potential to transform society.  

Elon Musk said at a recent conference that AI is "one of the biggest risks to the future of civilization." Former Meta executive John Carmack believes AI may be able to think and act like humans in just a decade. And former Google CEO Eric Schmidt thinks that AI could have as big an impact on warfare as nuclear weapons did.

The findings may contain bias, though they reflect experts' overall impressions of AI 

Even so, these results should be taken with a grain of salt, per the researchers. They said questions around AI's risks could have been interpreted in many ways, and that some questions were more controversial than others, which could have skewed the answers.

The results may also contain bias: men, high-level academics and researchers, and American respondents may be overrepresented in the survey, according to the researchers. Some respondents also chose not to answer the question about AI's catastrophic impacts, believing the question was "too strong"; they said AI should not be treated as a human who makes its own decisions, per the researchers.

Still, these findings highlight the breadth of impressions experts hold about AI.

"Given these issues, it is probably reasonable to view the answers to the questions on this survey as reflecting something between objective beliefs and signaling behavior," the survey concluded. 
