New Sora video released by OpenAI for TED Talks is mind-blowing

1. OpenAI’s latest Sora video promotes a new season of TED Talks focusing on artificial intelligence.
2. The video was created by artist Paul Trillo using Sora, a text-to-video model; he generated 330 clips and edited them down to 25.
3. The video features rapid motion through research labs, factories, and lecture halls, showcasing the potential of generative AI video for storytelling.

OpenAI’s latest video, created with its Sora text-to-video model, is a rapid fly-through of innovation and conversation, tinted with hints of red, promoting a new season of TED Talks that imagines artificial intelligence 40 years from now. Produced by professional video producer Paul Trillo, it takes a rollercoaster ride through research labs, factories, and lecture halls, ending with a shot of someone giving a talk on stage. The video highlights AI’s ability to generate compelling visual content: every shot and all the motion are AI-generated except for the TED logo.

Currently, only a small group of OpenAI-approved artists and creators have access to Sora, but that is expected to change as OpenAI plans to integrate Sora into ChatGPT and into third-party tools such as Adobe Premiere Pro. Trillo had to generate more than 330 clips from text prompts to produce the final 1:33 video for TED Talks, showcasing the versatility and potential of generative AI in storytelling.

The video opens with an explosive scene and takes viewers on a journey through cities, buildings, factories, and experiments, interspersed with shots of people giving talks against a red background. Set to music by Jacques, it demonstrates the creative possibilities of AI-generated video and suggests that AI will not replace creatives but rather open new avenues for creativity and storytelling.
