DeepMind Video-to-Audio (V2A)
Written by prodigitalweb

In this article, we discuss DeepMind's Video-to-Audio (V2A) technology, which enables synchronized audiovisual generation. It combines video pixels with natural language text prompts to produce rich soundscapes for the on-screen action.

Video generation models such as Dream Machine, Sora, Veo, and Kling are advancing rapidly and give users the ability to make videos from text prompts. However, most of these systems produce silent videos. Google DeepMind is well aware of the problem and is working on a model that can generate dialogue as well as soundtracks for these videos.

About the DeepMind Video-to-Audio (V2A) Technology:

Google introduced the Veo text-to-video model at Google I/O 2024, and V2A is designed to work alongside it. You can use the technology to add dialogue, realistic sound effects, and dramatic music that matches the tone of a video. It can also work with traditional footage, such as silent films and archival material.

It is capable of creating an unlimited number of soundtracks for any video. It also accepts an optional 'positive prompt' and 'negative prompt,' which you can use to tune the output to your liking. In addition, the generated audio is watermarked with SynthID technology so that it can be identified as AI-generated.

The technology takes a sound description as input and relies on a diffusion model trained on sounds, dialogue transcripts, and videos. Because the model cannot be trained on every kind of video, the output can sometimes be distorted. Google does not plan to release this technology to the public anytime soon, in order to prevent misuse.

How Does DeepMind Video-to-Audio Work?

Researchers experimented with autoregressive as well as diffusion approaches in order to find the most scalable AI architecture. According to the researchers, the diffusion approach to audio generation gave the best results for synchronizing video and audio information.

First, the DeepMind Video-to-Audio (V2A) system encodes the video input into a compressed representation. The diffusion model then iteratively refines the audio from random noise, guided by the visual input and any natural language prompts, so that the synchronized audio aligns closely with the prompt. Finally, the audio output is decoded into a waveform and combined with the video data.
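To make that flow concrete, here is a minimal, runnable Python sketch of the encode, refine, and decode sequence described above. Every function name, shape, and piece of toy arithmetic is an assumption for illustration only; none of it corresponds to DeepMind's actual models or any public API.

```python
import numpy as np

# Toy stand-ins for the three stages described above (assumed, not DeepMind's code).

def encode_video(video_frames: np.ndarray) -> np.ndarray:
    """Compress the video into a compact per-frame feature (toy: mean pixel value)."""
    # video_frames: (num_frames, height, width, channels)
    return video_frames.reshape(video_frames.shape[0], -1).mean(axis=1, keepdims=True)

def prompt_embedding(prompt: str, dim: int = 16) -> np.ndarray:
    """Map a text prompt to a fixed-size vector (toy: hash-seeded random embedding)."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.standard_normal(dim)

def diffusion_refine(noise: np.ndarray, video_feat: np.ndarray,
                     prompt_vec: np.ndarray, steps: int = 20) -> np.ndarray:
    """Iteratively nudge random noise toward audio that reflects the conditioning."""
    target = np.tile(video_feat.mean() + prompt_vec.mean(), noise.shape)
    audio = noise.copy()
    for _ in range(steps):              # each step removes a little noise
        audio += 0.1 * (target - audio)
    return audio

def generate_audio(video_frames: np.ndarray, prompt: str,
                   num_samples: int = 48_000) -> np.ndarray:
    video_feat = encode_video(video_frames)          # 1. encode and compress the video
    prompt_vec = prompt_embedding(prompt)            # 2. embed the natural-language prompt
    noise = np.random.standard_normal(num_samples)   # 3. start from pure random noise
    return diffusion_refine(noise, video_feat, prompt_vec)  # 4. waveform to pair with the video

# Usage: one second of fake video frames plus a prompt.
frames = np.random.rand(24, 64, 64, 3)
waveform = generate_audio(frames, "rain on a tin roof, distant thunder")
print(waveform.shape)  # (48000,)
```

The real system works on learned latent representations rather than raw pixel averages, but the overall shape of the pipeline, video conditioning plus a prompt steering a denoising loop, is what the paragraph above describes.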

To guide the model toward particular sounds and produce higher-quality audio, the researchers added more information to the training process, such as AI-generated annotations with detailed descriptions of sounds and transcripts of spoken dialogue.

Because it is trained on audio, video, and these extra annotations, DeepMind Video-to-Audio (V2A) technology learns to associate audio events with different visual scenes while responding to the information provided in the transcripts or annotations.
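Based on that description, a single training record might pair a clip with its soundtrack, an AI-generated sound annotation, and a dialogue transcript. The sketch below is an assumed structure for illustration; the field names are not DeepMind's actual schema.

```python
from dataclasses import dataclass

@dataclass
class V2ATrainingExample:
    video_path: str        # the visual input the model learns to "listen" to
    audio_path: str        # the ground-truth soundtrack for that clip
    sound_annotation: str  # AI-generated description of the sounds in the clip
    transcript: str        # transcript of any spoken dialogue

example = V2ATrainingExample(
    video_path="clips/street_scene.mp4",
    audio_path="clips/street_scene.wav",
    sound_annotation="busy traffic, car horns, footsteps on wet pavement",
    transcript="Taxi! Over here!",
)
print(example.sound_annotation)
```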

The Possibilities Are Endless:

This technology can be used not only to add sound effects to silent videos but also to create soundtracks for historical footage or educational documentaries. It could even generate audio descriptions for people who are visually impaired.

Training the AI for Accuracy:

Google DeepMind trained the technology on a huge dataset encompassing video, audio, and supplementary annotations so that it could build the necessary knowledge and understanding. The annotations work like detailed captions, describing the spoken dialogue and the sounds present in the videos. Through this comprehensive training, the technology builds a strong association between visuals and particular sounds.

Enhanced Creative Control:

V2A technology can produce countless soundtracks for any video input. You can define a 'positive prompt' to guide the output toward the sounds you want, or a 'negative prompt' to steer it away from the sounds you don't.

This flexibility gives you additional control over the audio output: you can quickly experiment with different outputs and choose the most suitable one, as the sketch below illustrates.
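The helper below sketches that workflow: generate several candidate soundtracks for one video, steered toward a positive prompt and away from a negative one, then audition them. The `generate` callable stands in for any video-plus-prompt-to-waveform model (for example, the toy `generate_audio()` sketched earlier); nothing here is DeepMind's real interface.

```python
import numpy as np
from typing import Callable

def generate_candidates(generate: Callable[[np.ndarray, str], np.ndarray],
                        video_frames: np.ndarray,
                        positive_prompt: str,
                        negative_prompt: str = "",
                        num_candidates: int = 3) -> list[np.ndarray]:
    # In a real diffusion system the negative prompt would be applied through
    # guidance; here it is simply folded into the prompt text for illustration.
    prompt = f"{positive_prompt} | avoid: {negative_prompt}"
    return [generate(video_frames, prompt) for _ in range(num_candidates)]

# Usage with a trivial stand-in generator that returns noise shaped like audio.
fake_generate = lambda frames, prompt: np.random.standard_normal(48_000)
frames = np.random.rand(24, 64, 64, 3)
takes = generate_candidates(fake_generate, frames,
                            positive_prompt="tense orchestral score, low strings",
                            negative_prompt="dialogue, crowd noise")
best = takes[0]  # audition the takes and keep the one that best fits the scene
```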

Limitations of the DeepMind Model:

Like many other Google research projects, V2A is only a research preview and has not been released yet. According to Google, several limitations and safety issues need to be addressed first.

The quality of the audio output depends on the quality of the video input: distortions or artifacts that fall outside the model's training distribution can cause a noticeable drop in audio quality. The company is also working on lip synchronization for videos that involve speech, but so far the results are not always accurate and can produce an uncanny valley effect.

The Bottom Line:

DeepMind's Video-to-Audio (V2A) technology merges visual cues from video with text prompts to create high-quality sound. Its goal is to transform how users make and experience AI-generated videos by adding fitting dialogue, realistic sound effects, and dramatic music.

The combination of Veo and V2A can enhance a video both visually and audibly. The technology can add sound to old archival footage and silent films as well as to modern videos generated with Veo. However, in order to prevent potential misuse, Google may not release it to the public anytime soon.

About the author

prodigitalweb