Gizmodo

Hello! We are officially launching a THING. It’s going to be a weekly roundup about what’s happening in artificial intelligence and how it affects you.


Gizmodo

After massive backlash over its wishy-washy communication regarding training artificial intelligence with customer data, Zoom wants to set the record straight. Today, Zoom issued an update to its previous announcement on its plans for AI, formally stating that the company will not use audio, video, chat, or similar…

Gizmodo

You can now visit the gravestone of Microsoft’s virtual assistant app Cortana in the growing mortuary of failed products. The company ended support for the Bing search-based voice control engine on Windows to help make room for its continuing AI-ization of Bing and its bevy of other digital products and services.

Tech Insider
Illustration demonstrating AI bias
After testing 14 major language models, the researchers found that OpenAI's ChatGPT and GPT-4 were the most left-leaning and libertarian, and Meta's LLaMA AI model was the most right-leaning and authoritarian.
Gizmodo

The White House has announced a contest incentivizing the development of new artificial intelligence systems designed to hunt for software vulnerabilities in critical infrastructure. The “AI Cyber Challenge” will be a two-year government-sponsored competition designed to spur the creation of new automated security…

Gizmodo

Google hungers for all that content produced by the wealth of digital publishers creating text, video, and images on a daily basis. To deal with the sticky copyright issues at the heart of AI training, Google is proposing that all those companies that don’t want their content gobbled up will need to “opt out” to ensure…

Tech Insider
OpenAI logo displayed on a phone screen and a laptop keyboard are seen in this illustration photo taken in Poland on April 24, 2022.
Jan Leike, the head of superalignment at OpenAI, shares his thoughts on what he's looking for in job candidates.
Gizmodo

Security researchers at IBM say they were able to successfully “hypnotize” prominent large language models like OpenAI’s ChatGPT into leaking confidential financial information, generating malicious code, encouraging users to pay ransoms, and even advising drivers to plow through red lights. The researchers were able…