OpenAI and Google showcased their latest and greatest AI technology this week. For the last two years, tech companies have raced to make AI models smarter, but now a new focus has emerged: make them multimodal. OpenAI and Google are zeroing in on AI that can seamlessly switch between its robotic mouth, eyes, and ears.
As if using ChatGPT for college essays weren’t enough, students are now getting a Gemini tool to help with their math and physics homework, too. Google is building on its well-received Circle to Search gesture to introduce the new feature.
OpenAI unveiled GPT-4o on Monday, showcasing real-time audio features that make it seem like Spike Jonze’s film Her is becoming a reality. Just like in the movie, OpenAI gave ChatGPT a dynamic, friendly, and arguably flirty voice that sounds convincingly human.
OpenAI unveiled GPT-4 Omni (GPT-4o) during its Spring Update on Monday morning in San Francisco. Chief Technology Officer Mira Murati and OpenAI staff showcased their newest flagship model, capable of real-time verbal conversations with a friendly AI chatbot that convincingly speaks like a human.
OpenAI is gearing up for a much-anticipated spring update to GPT-4 and ChatGPT on Monday, and multiple reports are pointing to voice as the next frontier for Sam Altman’s AI company.
The mysterious AI chatbot “gpt2-chatbot” returned to the major large language model benchmarking site LMSYS Org on Monday night, roughly a week after it abruptly disappeared.
Rabbit CEO Jesse Lyu spent the last week defending the Rabbit R1, telling Gizmodo and other outlets that it’s “not an Android app.” Now, he may have to defend the next fatal claim.