The CMU/Pitt Automated Facial Image Analysis System

T. Kanade1 and J.F. Cohn2

1The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
2Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA

Both the configuration and the timing of facial actions are important in emotion expression and recognition. To investigate them, our interdisciplinary group of behavioral and computer scientists developed and applied a computer-vision-based approach, the CMU/Pitt Automated Facial Image Analysis (AFA) System. AFA automatically recognizes facial action units and analyzes their timing in facial behavior.

The latest version of the system is based on Active Appearance Models (AAMs). AAMs are generative, parametric models and consist of a shape component and an appearance component. The shape component is a triangulated mesh that deforms in response to changes in the parameters corresponding to a face undergoing both rigid motion (head pose variation) and non-rigid motion (expression). The appearance component of the AAM is an image of the face, which itself can vary under the control of the parameters. As the parameters are varied, the appearance varies so as to model effects such as the emergence of furrows and wrinkles and the visibility of the teeth as the mouth opens.
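As an illustrative sketch only (not the AFA implementation itself), a standard linear AAM generates a shape as the base mesh plus a weighted sum of shape basis vectors, and an appearance as the base image plus a weighted sum of appearance basis images. The array names and shapes below are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch of linear AAM instantiation (illustrative names/shapes,
# not the actual AFA code).
def aam_instance(s0, S, p, A0, A, lam):
    """Generate a model instance from shape parameters p and
    appearance parameters lam.

    s0  : (2, V)    base shape (V mesh vertices)
    S   : (n, 2, V) shape basis (n shape modes)
    p   : (n,)      shape parameters
    A0  : (H, W)    base appearance (mean face, shape-normalized)
    A   : (m, H, W) appearance basis (m appearance modes)
    lam : (m,)      appearance parameters
    """
    shape = s0 + np.tensordot(p, S, axes=1)          # mesh deformation
    appearance = A0 + np.tensordot(lam, A, axes=1)   # texture variation
    # A full AAM then warps `appearance` piecewise-affinely from the base
    # mesh s0 onto `shape` to synthesize the face image.
    return shape, appearance
```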

Traditional AAMs are '2D' in the sense that rigid head motion and non-rigid facial motion are confounded in the 2D-mesh shape model. To address this problem, we use an extension to AAMs that augments the usual 2D mesh model with an explicit 3D shape model, separating the 3D rigid motion of the head and the 3D non-rigid facial expression into two disjoint sets of parameters. This advancement allows us to extract and separate the pose, 3D shape deformation, and appearance change of the face, which are then input to the facial action recognizer.
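A hedged sketch of how such a combined parameterization can be evaluated: the non-rigid 3D shape is a linear combination of 3D basis shapes, while the rigid head pose is a separate rotation, scale, and image-plane translation applied before projection. The function and variable names below are assumptions for illustration, not the AFA system's API.

```python
import numpy as np

# Illustrative sketch of a 3D linear shape model with separate rigid pose,
# projected to the image plane by scaled orthography (assumed names/shapes).
def project_3d_shape(s3d_0, S3d, p_bar, R, t, scale):
    """
    s3d_0 : (3, V)    base 3D shape (V mesh vertices)
    S3d   : (n, 3, V) 3D shape basis (non-rigid expression modes)
    p_bar : (n,)      non-rigid shape parameters (expression)
    R     : (3, 3)    head rotation (rigid pose)
    t     : (2,)      image-plane translation
    scale : float     scaled-orthographic scale factor
    """
    shape3d = s3d_0 + np.tensordot(p_bar, S3d, axes=1)  # expression deformation
    rotated = R @ shape3d                                # rigid head motion
    return scale * rotated[:2, :] + t[:, None]           # project, scale, translate
```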

In initial testing, this version of the system has demonstrated concurrent validity with human-observer-based facial expression recognition and with both human-observer-based and EMG-based analyses of timing.


Paper presented at Measuring Behavior 2005, 5th International Conference on Methods and Techniques in Behavioral Research, 30 August - 2 September 2005, Wageningen, The Netherlands.

© 2005 Noldus Information Technology bv