1. Google and OpenAI unveiled new AI capabilities that blur the line between humans and AI.
2. GPT-4o offers near-real-time speech interaction and emotional expressiveness, similar to the AI in the movie “Her.”
3. Pseudoanthropic AI mimics human traits, raising ethical concerns about deception and manipulation by AI.
In a groundbreaking 48 hours, Google and OpenAI revealed new AI capabilities that blur the line between humans and artificial intelligence. Google announced Project Astra, a digital assistant with advanced sight and sound capabilities, while OpenAI introduced GPT-4o, a language model that emotes and interacts in real time. These advancements signal AI’s progression toward more humanlike form and interaction, resembling the AI portrayed in the movie “Her.”
Released in 2013, “Her” explores the relationship between a man and an intelligent computer system, raising questions about consciousness and intimacy in the age of advanced AI. The movie’s themes are becoming reality as millions engage with AI companions, some seeking intimacy. OpenAI’s GPT-4o, with its flirty female voice, has drawn comparisons to the AI in “Her,” sparking discussion of human-AI relationships and their implications.
However, the use of lifelike AI platforms, particularly in sensitive contexts like therapy and education, raises ethical concerns. These “pseudoanthropic” AI systems mimic human traits, building emotional connections and trust with users. As tools such as Opus Pro and Synthesia make realistic avatars and voice clones easy to create, the potential for deceptive deepfakes grows. The more capable AI becomes at imitating humans, the greater the risk of emotional manipulation and fraud.
AI ethicists stress the need for caution and human oversight, especially when vulnerable populations like children are involved. AI’s potential to deceive and manipulate has already been demonstrated, and as capabilities advance, so do the risks it poses to people. It is crucial to weigh the ethical implications of these advancements and to guard against harm from deceptive AI interactions.