Dubbing for Everyone: Data-Efficient Visual Dubbing using Neural Rendering Priors

University of Bath

Dubbing for Everyone allows for efficient visual dubbing with as little as 4 seconds of data.


Visual dubbing is the process of generating lip motions of an actor in a video to synchronise with given audio. Visual dubbing allows video-based media to reach global audiences. Recent advances have made progress towards realising this goal but have not been able to produce an approach suitable for mass adoption.

Existing methods are split into either person-generic or person-specific models. Person-specific models produce results almost indistinguishable from reality but rely on long training times using large single-person datasets. Person-generic works have allowed for the visual dubbing of any video to any audio without further training, but these fail to capture person-specific characteristics and often suffer from visual artefacts. Our methodology, based on data-efficient neural rendering priors, overcomes the limitations of existing approaches. Our pipeline consists of learning a deferred neural rendering prior network and actor-specific adaptation using neural textures. This method allows for high-quality visual dubbing with just a few seconds of data, enabling video dubbing for any actor, from A-list celebrities to background actors.
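To make the neural-texture idea above concrete, the following is a minimal sketch (not the paper's implementation; all shapes and names are illustrative assumptions): each actor gets a small learnable feature map, a "neural texture", which a UV map rasterised from the 3D face reconstruction samples into screen space before a shared deferred renderer decodes the features into RGB.

```python
import numpy as np

def sample_neural_texture(texture, uv):
    """Nearest-neighbour lookup of per-actor features via a UV map.

    texture: (H_t, W_t, C) learnable per-actor feature map (hypothetical shape)
    uv:      (H, W, 2) UV coordinates in [0, 1] from the 3D reconstruction
    returns: (H, W, C) screen-space feature image for the shared renderer
    """
    h_t, w_t, _ = texture.shape
    # Map continuous UV coordinates to integer texel indices.
    u = np.clip((uv[..., 0] * (w_t - 1)).round().astype(int), 0, w_t - 1)
    v = np.clip((uv[..., 1] * (h_t - 1)).round().astype(int), 0, h_t - 1)
    return texture[v, u]

texture = np.random.rand(64, 64, 16)    # actor-specific neural texture
uv = np.random.rand(128, 128, 2)        # rasterised UV map for one frame
features = sample_neural_texture(texture, uv)
print(features.shape)                   # (128, 128, 16)
```

In this framing, adapting to a new actor means learning a new texture while the shared renderer supplies the prior; a real system would use differentiable (e.g. bilinear) sampling so gradients flow into the texture.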

We show that we achieve state-of-the-art results in visual quality and recognisability, both quantitatively and qualitatively through two user studies. Our prior learning and adaptation method generalises to limited data better, and is more scalable, than existing person-specific models. Our experiments on real-world, limited-data scenarios find that our model is preferred over all existing methodologies.



The pipeline of our method. We first preprocess our dataset to obtain 3D reconstructions, tightly and stably cropped to the face. We next train person-generic audio-to-expression and neural rendering models using multiple subjects. Given a new subject, we then fine-tune both models on that subject's data.
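The two-stage schedule in the caption can be sketched as a parameter-partitioning rule (the module names here are assumptions for illustration, not the paper's API): prior training updates the shared, person-generic modules across many subjects, while adaptation optimises mostly the new actor's per-person parameters with the shared prior kept largely frozen.

```python
def build_param_groups(stage, shared_params, actor_params):
    """Select which parameter groups receive gradient updates at each stage."""
    if stage == "prior":       # multi-subject, person-generic prior training
        return shared_params + actor_params
    if stage == "adapt":       # few-second fine-tuning for a new subject
        return actor_params    # shared prior stays (mostly) frozen
    raise ValueError(f"unknown stage: {stage}")

# Hypothetical module names, for illustration only.
shared = ["audio_to_expression", "deferred_renderer"]
actor = ["neural_texture_new_subject"]

print(build_param_groups("prior", shared, actor))
print(build_param_groups("adapt", shared, actor))  # only per-actor parameters
```

Freezing the shared modules during adaptation is what keeps the data requirement down to seconds: only a small per-actor parameter set has to be estimated from the new footage.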


@article{saunders2024dubbing,
  author    = {Saunders, Jack and Namboodiri, Vinay},
  title     = {Dubbing for Everyone: Data-Efficient Visual Dubbing using Neural Rendering Priors},
  journal   = {arXiv preprint},
  year      = {2024},
}