- Thousands of AI researchers have shared their views on the future of AI in a new study.
- Almost 58% of 2,778 researchers surveyed said they believed there was at least a 5% chance of human extinction from AI.
- Whether AI poses a significant threat to humanity has been an intense debate in Silicon Valley.
Over the last year, we've heard a lot about the risk of AI destroying humanity.
Industry leaders and AI heavyweights said the rapid development of the technology could have catastrophic consequences for the world.
But while most AI researchers acknowledge the possibility of existential threats, they don't consider such dramatic outcomes very likely, according to the largest survey of AI researchers to date.
In the survey, the 2,778 participants were asked questions about the social consequences of AI developments and possible timelines for the future of the tech.
Almost 58% of those surveyed said they considered the chance of human extinction or other extremely bad outcomes brought about by the tech to be at least 5%.
The study was published by researchers and academics at universities around the world, including the University of Oxford in the UK and the University of Bonn in Germany.
One of the paper's authors, Katja Grace, told New Scientist the survey was a signal that most AI researchers "don't find it strongly implausible that advanced AI destroys humanity." She added there was a "general belief in a non-minuscule risk."
Whether AI poses a significant threat to humanity has been an intense debate in Silicon Valley in the last few months.
Several AI experts, including Google Brain cofounder Andrew Ng and AI godfather Yann LeCun, have dismissed some of the bigger doomsday scenarios. LeCun has even accused tech leaders such as Sam Altman of having ulterior motives for hyping AI fears.
In October, LeCun said some of the leading AI companies were attempting "regulatory capture" of the industry by pushing for strict regulation.