Google simplifies the process of creating short videos from your photos with VDIM.

1. Google’s new AI model VDIM can create a seamless animation from two images, filling in the gaps to make a live photo.
2. VDIM uses diffusion models to turn still images into a video, capturing motion and dynamics to create fluid motion.
3. Potential uses for VDIM include video restoration, such as repairing old movies with missing or damaged frames, though the model has not yet been tested outside the research team.

Google recently unveiled VDIM, a new AI model from its research division, Google DeepMind. The model takes two images and creates a seamless animation resembling a live photo, generating all the intermediate frames with AI. While currently only a research preview, this technology could one day become a common feature in smartphone photography.

VDIM turns still images into video using diffusion models, with the two input images serving as the first and last frames. It first generates a low-resolution version of the full video and refines it through a cascade of diffusion models to capture the motion and dynamics of the scene. A higher-resolution stage then upscales this result, conditioning on the input images so the output matches them closely and the motion stays smooth.
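The two-stage cascade described above can be sketched in code. This is a toy illustration of the structure only, not Google's implementation: the `base_model` placeholder blends the endpoint frames where VDIM would run an actual low-resolution diffusion sampler, and nearest-neighbour upscaling stands in for the super-resolution diffusion stage. All function names here are hypothetical.

```python
import numpy as np

def base_model(first, last, num_frames):
    # Placeholder for the low-resolution diffusion model. VDIM would
    # iteratively denoise from random noise, conditioned on both
    # endpoint frames; here we simply blend them to show the interface.
    alphas = np.linspace(0.0, 1.0, num_frames)
    return np.stack([(1 - a) * first + a * last for a in alphas])

def downsample(frame, factor):
    # Average-pool a single H x W x C frame by `factor` spatially.
    h, w = frame.shape[:2]
    return frame[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

def upsample(video, factor):
    # Nearest-neighbour upscaling, standing in for the higher-resolution
    # diffusion stage that sharpens and refines the low-res result.
    return video.repeat(factor, axis=1).repeat(factor, axis=2)

def interpolate(first, last, num_frames=8, factor=4):
    # Stage 1: generate the full motion at low resolution.
    low_res = base_model(downsample(first, factor),
                         downsample(last, factor), num_frames)
    # Stage 2: upscale; condition on the original inputs so the first
    # and last frames of the output match them exactly.
    video = upsample(low_res, factor)
    video[0], video[-1] = first, last
    return video
```

The key design point the sketch preserves is that motion is resolved cheaply at low resolution first, and only then lifted to full resolution anchored to the two real input frames.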

One potential application of VDIM is video restoration: by recreating the motion between two clean frames, it could help repair old family movies or films with broken or missing frames. Although not yet available for public use, example clips shared by Google DeepMind demonstrate its capabilities, turning still images into fluid motion sequences such as a box cart race or a person swinging on a swing.

Overall, VDIM shows promise as a tool for AI video creation and restoration. With further development and integration into consumer software, it could change how videos are created and restored, particularly for preserving and enhancing old footage.
