arxiv:2506.08279

Seeing Voices: Generating A-Roll Video from Audio with Mirage

Published on Jun 9
· Submitted by deepkyu on Jun 11
AI-generated summary

Mirage generates realistic video from audio inputs, integrating with speech synthesis to create compelling multimodal content through a unified, self-attention-based training approach.

Abstract

From professional filmmaking to user-generated content, creators and consumers have long recognized that the power of video depends on the harmonious integration of what we hear (the video's audio track) with what we see (the video's image sequence). Current approaches to video generation either ignore sound to focus on general-purpose but silent image sequence generation or address both visual and audio elements but focus on restricted application domains such as re-dubbing. We introduce Mirage, an audio-to-video foundation model that excels at generating realistic, expressive output imagery from scratch given an audio input. When integrated with existing methods for speech synthesis (text-to-speech, or TTS), Mirage results in compelling multimodal video. When trained on audio-video footage of people talking (A-roll) and conditioned on audio containing speech, Mirage generates video of people delivering a believable interpretation of the performance implicit in input audio. Our central technical contribution is a unified method for training self-attention-based audio-to-video generation models, either from scratch or given existing weights. This methodology allows Mirage to retain generality as an approach to audio-to-video generation while producing outputs of superior subjective quality to methods that incorporate audio-specific architectures or loss components specific to people, speech, or details of how images or audio are captured. We encourage readers to watch and listen to the results of Mirage for themselves (see paper and comments for links).
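
For readers who want a concrete picture of what a "self-attention-based audio-to-video generation model" can look like, here is a minimal, assumption-based sketch in PyTorch. It is not the paper's architecture: the module names, token shapes, and the concatenate-then-self-attend design are illustrative guesses. The point it shows is that audio and video tokens can share a single self-attention, so the audio conditions the video without an audio-specific cross-attention module.

```python
# Hypothetical sketch of audio-conditioned video generation via joint
# self-attention. All names, shapes, and hyperparameters are illustrative
# assumptions, not details taken from the Mirage paper.
import torch
import torch.nn as nn


class JointAudioVideoBlock(nn.Module):
    """One transformer block in which video tokens attend to audio tokens
    (and vice versa) through a single shared self-attention, rather than a
    separate audio-specific cross-attention module."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, video_tokens, audio_tokens):
        # Concatenate both modalities into one sequence so ordinary
        # self-attention carries the audio conditioning into the video tokens.
        x = torch.cat([audio_tokens, video_tokens], dim=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        # Return only the updated video tokens; audio tokens act as conditioning.
        return x[:, audio_tokens.shape[1]:]


# Toy usage: 64 audio tokens conditioning 256 spatio-temporal video tokens.
block = JointAudioVideoBlock()
audio = torch.randn(2, 64, 512)
video = torch.randn(2, 256, 512)
out = block(video, audio)
print(out.shape)  # torch.Size([2, 256, 512])
```

In a full generative model, blocks like this would typically sit inside a diffusion or autoregressive backbone with positional and timestep conditioning; that machinery is omitted here for brevity.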

Community

Paper author · Paper submitter

We're revealing the magic behind Mirage with the release of our technical report.
Mirage, our omni-modal foundation model, generates expressive actors that actually look and feel human.

What sets Mirage apart is its ability to generate:

  • People that don’t exist, based on uploaded audio
  • Body language and expression, directed from audio
  • The full spectrum of emotions
  • Natural skin texture, devoid of AI sheen

More video generation results are available on our project page.
