  • Ian Hogarth — who has invested in over 50 AI companies — wrote an FT essay warning about the tech.
  • He says artificial general intelligence would be "God-like" because it would learn by itself.
  • The heated competition between OpenAI and other companies could lead to disaster.

A future "God-like AI" could lead to the "obsolescence or destruction of the human race" if there's no regulation on the technology's rapid development, a prolific AI investor has warned in a Financial Times essay.

Artificial general intelligence or AGI — the point at which a machine can understand or learn anything that humans can — isn't here yet but is considered the primary goal of the rapidly growing industry. And it comes with high stakes.

While some are excited about AI's financial benefits, like one ex-Meta exec who said AI would be worth trillions by the 2030s, others are warning about the risk of "nuclear-level catastrophe."

"A three-letter acronym doesn't capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI," Ian Hogarth wrote in the FT. Hogarth used that term, he said, because such technology could develop by itself and transform the world without supervision.

"God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race," he added.

"Until now, humans have remained a necessary part of the learning process that characterizes progress in AI. At some point, someone will figure out how to cut us out of the loop, creating a God-like AI capable of infinite self-improvement," Hogarth added. "By then, it may be too late."

Hogarth studied engineering, including artificial intelligence, at Cambridge University before cofounding Songkick, a concert-discovery service that was ultimately sold to Warner Music Group. According to his own website, he has since invested in over 50 startups that use machine learning, including Anthropic, which was founded by former OpenAI employees. He also writes an annual report called "The State of AI."

Jensen Huang, the CEO of Nvidia — the chip maker whose GPUs are often used to power AI — said in a recent earnings call that AI had grown 1 million times more powerful over the last decade, and that he expects a similar leap forward from OpenAI's ChatGPT within the next decade, per PC Gamer.

In his FT essay, Hogarth noted that the largest AI models have grown 100 million times more powerful over the same period, measured by how many operations they can compute per second.

He also warned that the heated competition between those at the forefront of the technology, like OpenAI and Alphabet-owned DeepMind, risks an unstable "God-like AI" because of a lack of oversight.

"They are running towards a finish line without an understanding of what lies on the other side," he wrote.

In a 2019 interview with the New York Times, OpenAI CEO Sam Altman compared his ambitions to the Manhattan Project, which created the first nuclear weapons. He paraphrased its mastermind, Robert Oppenheimer, saying: "Technology happens because it is possible," and pointed out that the pair share the same birthday. 

While AGI will have a big impact, Hogarth says whether that impact is positive or disastrous could depend on whether companies keep chasing progress as quickly as possible, and on how long regulation takes to arrive.

Read the original article on Business Insider