Audio-Driven Emotional Video Portraits

Xinya Ji
Hang Zhou
Kaisiyuan Wang
Wayne Wu
Chen Change Loy
Xun Cao
Feng Xu
Nanjing University
The Chinese University of Hong Kong
The University of Sydney
SenseTime Research
Nanyang Technological University
Tsinghua University
 
CVPR 2021
[Paper]
[Supp]
[GitHub]
Given an audio clip and a target video, our Emotional Video Portraits (EVP) approach generates emotion-controllable talking portraits whose emotion can be changed smoothly by interpolating in the latent space.
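A minimal sketch of that latent-space interpolation, assuming a clip-level emotion embedding as described in the abstract. Here emotion_encoder, landmark_decoder, and content_feat are hypothetical stand-ins for the paper's modules, not the released API:

import torch

def interpolate_emotion(emo_a: torch.Tensor, emo_b: torch.Tensor, alpha: float) -> torch.Tensor:
    # Linearly blend two emotion embeddings: alpha=0 returns emo_a, alpha=1 returns emo_b.
    return (1.0 - alpha) * emo_a + alpha * emo_b

# Hypothetical usage: sweep alpha to morph a talking portrait from one emotion
# to another (names below are illustrative placeholders):
# emo_neutral = emotion_encoder(audio_neutral)
# emo_happy = emotion_encoder(audio_happy)
# for alpha in torch.linspace(0.0, 1.0, steps=10):
#     emo = interpolate_emotion(emo_neutral, emo_happy, float(alpha))
#     landmarks = landmark_decoder(content_feat, emo)  # drives the portrait frames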

Abstract

In this work, we present Emotional Video Portraits (EVP), a system for synthesizing high-quality video portraits with vivid emotional dynamics driven by audio. Specifically, we propose the Cross-Reconstructed Emotion Disentanglement technique to decompose speech into two decoupled spaces, i.e., a duration-independent emotion space and a duration-dependent content space. With the disentangled features, dynamic 2D emotional facial landmarks can be deduced. We then propose the Target-Adaptive Face Synthesis technique to generate the final high-quality video portraits by bridging the gap between the deduced landmarks and the natural head poses of target videos. Extensive experiments demonstrate the effectiveness of our method both qualitatively and quantitatively.
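The cross-reconstruction idea can be sketched as follows. Assuming audio features a_ij that carry content i spoken with emotion j, swapping the two codes between clips and requiring each swap to reconstruct the matching target pushes emotion and content into separate spaces. All module shapes and the reconstruction target below are illustrative placeholders, not the paper's actual architecture:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Disentangler(nn.Module):
    # Illustrative encoders/decoder for cross-reconstructed emotion disentanglement.
    def __init__(self, feat_dim: int = 256, audio_dim: int = 80):
        super().__init__()
        self.content_enc = nn.GRU(audio_dim, feat_dim, batch_first=True)  # duration-dependent, per frame
        self.emotion_enc = nn.Sequential(
            nn.Linear(audio_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))                                # duration-independent after pooling
        self.decoder = nn.Linear(2 * feat_dim, audio_dim)

    def encode(self, audio: torch.Tensor):
        content, _ = self.content_enc(audio)            # (B, T, D) frame-level content code
        emotion = self.emotion_enc(audio).mean(dim=1)   # (B, D) clip-level emotion code
        return content, emotion

    def decode(self, content: torch.Tensor, emotion: torch.Tensor) -> torch.Tensor:
        T = content.size(1)
        emo = emotion.unsqueeze(1).expand(-1, T, -1)    # broadcast emotion over time
        return self.decoder(torch.cat([content, emo], dim=-1))

def cross_reconstruction_loss(model, a11, a12, a21, a22):
    # a_ij carries content i and emotion j. Swapping codes between a11 and a22
    # should reconstruct a12 and a21, which forces the two spaces apart.
    c1, e1 = model.encode(a11)
    c2, e2 = model.encode(a22)
    rec_12 = model.decode(c1, e2)  # content 1 + emotion 2 -> should match a12
    rec_21 = model.decode(c2, e1)  # content 2 + emotion 1 -> should match a21
    return F.l1_loss(rec_12, a12) + F.l1_loss(rec_21, a21)

Note that such swapped pairs require clips of the same content under different emotions; since emotion changes speech pace, the paper aligns durations before pairing, a step omitted in this sketch.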


Video


Code

Overview of our Emotional Video Portraits algorithm.

 [GitHub]


Paper

Xinya Ji, Hang Zhou, Kaisiyuan Wang, Wayne Wu, Chen Change Loy, Xun Cao, Feng Xu.
Audio-Driven Emotional Video Portraits.
In CVPR, 2021.
(hosted on arXiv)


[Bibtex]


Acknowledgements

This work is supported in part by the NSFC (No. 62025108), the National Key R&D Program of China (2018YFA0704000), the Beijing Natural Science Foundation (JQ19015), the NSFC (No. 61822111, 61727808, 61627804), the NSFJS (BK20192003), the Leading Technology of Jiangsu Basic Research Plan under Grant BK2019200, and A*STAR through the Industry Alignment Fund - Industry Collaboration Projects Grant.