JEAN: Joint Expression and Audio-guided NeRF-based Talking Face Generation

Published in British Machine Vision Conference (BMVC), 2024

Download paper here

We introduce a novel method for joint expression and audio-guided talking face generation. Recent approaches either struggle to preserve the speaker identity or fail to produce faithful facial expressions. To address these challenges, we propose a NeRF-based network. Since we train our network on monocular videos without any ground truth, it is essential to learn disentangled representations for audio and expression. We first learn audio features in a self-supervised manner, given utterances from multiple subjects. By incorporating a contrastive learning technique, we ensure that the learned audio features are aligned to the lip motion and disentangled from the muscle motion of the rest of the face. We then devise a transformer-based architecture that learns expression features, capturing long-range facial expressions and disentangling them from the speech-specific mouth movements. Through quantitative and qualitative evaluation, we demonstrate that our method can synthesize high-fidelity talking face videos, achieving state-of-the-art facial expression transfer along with lip synchronization to unseen audio.
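To make the contrastive alignment concrete, below is a minimal InfoNCE-style sketch of how audio features can be pulled toward the lip motion of the same frame and pushed away from other frames. All names, shapes, and the temperature value are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn.functional as F

def audio_lip_contrastive_loss(audio_feats, lip_feats, temperature=0.07):
    """InfoNCE-style loss: each audio feature is attracted to the
    lip-motion feature of its own frame and repelled from the other
    frames in the batch.

    audio_feats, lip_feats: (B, D) tensors. Names, shapes, and the
    temperature are hypothetical, chosen for illustration only.
    """
    audio = F.normalize(audio_feats, dim=-1)
    lip = F.normalize(lip_feats, dim=-1)
    # (B, B) cosine-similarity matrix between all audio/lip pairs.
    logits = audio @ lip.t() / temperature
    # Matching audio/lip pairs lie on the diagonal.
    targets = torch.arange(audio.size(0), device=audio.device)
    # Symmetric cross-entropy over both matching directions.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```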

Bibtex:

@inproceedings{chakkera2024jean,
  title={JEAN: Joint Expression and Audio-guided NeRF-based Talking Face Generation},
  author={Chakkera, Sai Tanmay Reddy and Chatziagapi, Aggelina and Samaras, Dimitris},
  booktitle={BMVC},
  year={2024}
}

Demo

Recommended citation: S. T. R. Chakkera, A. Chatziagapi, and D. Samaras, ‘JEAN: Joint Expression and Audio-guided NeRF-based Talking Face Generation’, in Brit. Mach. Vis. Conf., 2024.