Trainable Videorealistic Speech Animation
MASSACHUSETTS INST OF TECH CAMBRIDGE
We describe how to create, using machine learning techniques, a generative, videorealistic speech animation module. A human subject is first recorded with a video camera as he/she utters a predetermined speech corpus. After the corpus is processed automatically, a visual speech module is learned from the data that is capable of synthesizing the human subject's mouth uttering entirely novel utterances that were not recorded in the original video. The synthesized utterance is re-composited onto a background sequence containing natural head and eye movement. The final output is videorealistic in the sense that it looks like a video camera recording of the subject. At run time, the input to the system can be either real audio sequences or synthetic audio produced by a text-to-speech system, as long as they have been phonetically aligned. The two key contributions of this paper are (1) a variant of the multidimensional morphable model (MMM) to synthesize new, previously unseen mouth configurations from a small set of mouth image prototypes, and (2) a trajectory synthesis technique based on regularization, which is automatically trained from the recorded video corpus and is capable of synthesizing trajectories in MMM space corresponding to any desired utterance.
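The regularization-based trajectory synthesis can be illustrated with a minimal one-dimensional sketch (the function name, targets, weights, and regularization constant below are hypothetical, not taken from the paper): the synthesized trajectory trades off matching per-frame targets (e.g. phoneme-conditioned means in MMM space) against a smoothness penalty on second differences, which reduces to solving one small linear system.

```python
import numpy as np

def synthesize_trajectory(targets, weights, lam=10.0):
    """Sketch of regularized trajectory synthesis in one dimension.

    Minimizes  sum_t w_t * (y_t - targets_t)^2
             + lam * sum_t (y_{t+1} - 2*y_t + y_{t-1})^2,
    a quadratic objective whose minimizer solves (W + lam*D'D) y = W*targets.
    """
    targets = np.asarray(targets, dtype=float)
    T = len(targets)
    W = np.diag(weights)
    # Second-difference operator D of shape (T-2, T): the smoothness penalty.
    D = np.zeros((T - 2, T))
    for t in range(T - 2):
        D[t, t:t + 3] = [1.0, -2.0, 1.0]
    A = W + lam * D.T @ D
    b = W @ targets
    return np.linalg.solve(A, b)

# Hypothetical per-frame targets (a step in and out of a mouth "pose").
targets = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
weights = [1.0] * len(targets)
traj = synthesize_trajectory(targets, weights)
```

With a larger `lam` the output hugs the targets less and becomes smoother; the paper's actual formulation operates on multidimensional MMM coordinates with learned per-phoneme statistics, but the same quadratic trade-off structure applies.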
- Anatomy and Physiology
- Recording and Playback Devices