1. Runway, an AI video generation platform, has added a new lip sync feature to its toolkit, allowing users to animate mouth and face movements in time to audio clips.
2. The lip sync feature uses natural-sounding synthetic voices from ElevenLabs and lets users clone their own voice with ElevenLabs technology directly within the Runway interface.
3. Tests of Runway's lip sync feature show impressive results when animating characters and faces, though it struggles with characters shown at a distance or with complex features such as beards.
Runway, a leading artificial intelligence video generation platform, has added a new lip sync feature to its toolkit, giving characters a voice and making them interactive. The feature uses synthetic voices from ElevenLabs and allows users to clone their own voice with the same technology. Convincing lip sync is a key step toward AI video going mainstream, and Runway's feature animates not just the mouth but the surrounding facial movements as well.
Tests were conducted to evaluate Runway's lip sync feature. Characters were created with various AI image tools, such as Leonardo and Midjourney, then given voices with Runway's Generative Voice tool. The results varied: some characters had noticeably more realistic lip animations and facial expressions than others.
In one test, Runway animated an older man staring into the distance, syncing his mouth movements to a selected voice. This was among the most impressive results, showing the potential of AI-generated content. However, when animating the mouth of an action figure, Runway's performance fell short of that of a similar tool from Pika Labs.
Overall, Runway's lip sync feature comes across as impressive and realistic compared with rival models. It supports longer monologues and dialogues, with added head movement animation for a more natural feel. While the technology is still new, it shows great potential for enhancing AI video content creation.