Welcome to Pengyu Hong's Homepage


An Integrated Framework for Face Modeling, Facial Motion Analysis and Synthesis

An integrated framework is presented that systematically addresses: (a) face deformation modeling; (b) model-based facial motion analysis; and (c) speech-to-facial-coarticulation modeling.

  • A set of Motion Units (MUs) serves as the quantitative visual representation of facial deformations; the same representation is shared by face animation and facial motion analysis. MUs are learned from a set of labeled real facial shapes. An arbitrary facial deformation can be approximated by a linear combination of MUs weighted by Motion Unit parameters (MUPs), so a face model can be animated simply by adjusting the MUPs (see the first sketch after this list).
  • A 2D Motion Unit based facial motion tracking algorithm is presented. The MUs provide high-level knowledge that makes the facial motion analysis robust. The tracking results are represented as a sequence of MU parameters, which can be used directly for face animation or for other training and recognition purposes (see the second sketch after this list).
  • A set of facial motion tracking results and the corresponding audio tracks are collected as an audio-visual database. Two real-time audio-to-visual mappings are trained on this database. They map audio features to MU parameters, which in turn animate face models via MU-compatible face animation techniques (see the third sketch after this list).
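
The linear Motion Unit model in the first bullet can be written compactly: a deformed face is the neutral shape plus a weighted sum of MU displacement fields. The sketch below illustrates this idea, assuming MUs are stored as per-vertex displacement arrays; the array shapes and function name are illustrative, not taken from the original work.

    import numpy as np

    def deform_face(neutral_vertices, motion_units, mu_params):
        """Apply a linear combination of Motion Units to a neutral face mesh.

        neutral_vertices : (V, 3) array, the neutral face geometry
        motion_units     : (K, V, 3) array, each MU is a field of vertex displacements
        mu_params        : (K,) array of Motion Unit parameters (weights)
        """
        # Weighted sum over the K Motion Units gives a (V, 3) displacement field.
        displacement = np.tensordot(mu_params, motion_units, axes=1)
        return neutral_vertices + displacement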
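
For the tracking bullet, one simplified way to recover MU parameters from observed 2D feature motion is a least-squares fit onto the MU subspace. This is only a per-frame sketch of that idea, not the tracking algorithm described above; the variable shapes and names are assumptions.

    import numpy as np

    def fit_mu_params(observed_displacements, motion_units_2d):
        """Estimate MU parameters that best explain observed 2D feature motion.

        observed_displacements : (P, 2) displacements of tracked feature points
        motion_units_2d        : (K, P, 2) 2D Motion Unit basis at the same points
        Returns the (K,) parameter vector minimizing the least-squares residual.
        """
        K = motion_units_2d.shape[0]
        A = motion_units_2d.reshape(K, -1).T     # (2P, K) design matrix
        b = observed_displacements.reshape(-1)   # (2P,) observation vector
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        return params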
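
For the audio-to-visual bullet, the sketch below trains a generic frame-wise regressor from audio features to MU parameters as a stand-in for the two mappings mentioned above; the feature dimensions, the placeholder data, and the choice of scikit-learn's MLPRegressor are assumptions for illustration only.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical sizes: N frames, D-dimensional audio features, K MU parameters.
    N, D, K = 5000, 13, 7
    audio_features = np.random.randn(N, D)   # placeholder for features such as MFCCs
    mu_params = np.random.randn(N, K)        # placeholder for tracked MU parameters

    # Fit a generic regressor mapping each audio frame to a vector of MU parameters.
    mapping = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    mapping.fit(audio_features, mu_params)

    # At synthesis time, predicted MU parameters drive an MU-compatible face model.
    predicted_mu_params = mapping.predict(audio_features[:10])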