- Big Tech is finally acknowledging AI's risks. Consumers have been voicing them for months.
- SEC filings show that Meta, Microsoft, and others are concerned about the issues AI might present.
- Misinformation and harmful content are some of the "risk factors" associated with generative AI.
It seems like artificial intelligence is everywhere these days, but some of the biggest players in tech are finally acknowledging the risks it poses.
AI has been the biggest topic of discussion in the tech industry since OpenAI unveiled ChatGPT to the public in November 2022. In the months since, companies like Google, Meta, Microsoft, and others have invested heavily in their AI efforts.
Major tech companies have been loud about their plans to join the AI arms race, but more recently, they've been quietly addressing ways it could actually be bad for business.
In its 2023 annual report, Google's parent company, Alphabet, said its AI products and services "raise ethical, technological, legal, regulatory, and other challenges, which may negatively affect our brands and demand."
Similarly, Meta, Microsoft, and Oracle have included their concerns about AI in Securities and Exchange Commission filings, usually under the "risk factors" section, Bloomberg reported.
Microsoft said its generative AI features "may be susceptible to unanticipated security threats from sophisticated adversaries."
"There are significant risks involved in developing and deploying AI and there can be no assurance that the usage of AI will enhance our products or services or be beneficial to our business, including our efficiency or profitability," Meta's 2023 annual report said.
Meta went on to list factors including misinformation (specifically during elections), harmful content, intellectual-property infringement, and data privacy as ways generative AI could harm users and leave the company vulnerable to litigation.
Meanwhile, the public has been vocal about concerns with AI making some jobs obsolete, large language models training on personal data, and the spread of misinformation.
On June 4, a group of current and former OpenAI employees signed a letter demanding that tech companies do more to mitigate the risks of AI and protect employees who raise questions about its safety.
These risks range from "the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," the letter read.
Meta, Google, and Microsoft didn't immediately respond to a request for comment from Business Insider.