Learning spatiotemporal models of facial expressions

M. Pantic, I. Patras and M.F. Valstar

Delft University of Technology, Delft, The Netherlands

Machine understanding of facial expressions could revolutionize human-machine interaction technologies and fields as diverse as security, behavioral science, medicine, and education. Consequently, computer-based recognition of facial expressions has become an active research area.

Most systems for automatic analysis of facial expressions attempt to recognize a small set of ‘universal’ emotions such as happiness and anger. Recent psychological studies claim, however, that facial expression interpretation in terms of emotions is culture dependent and may even be person dependent. To capture the rich and sometimes subtle shadings of emotion that humans recognize in a facial expression, context-dependent (e.g., user- and task-dependent) recognition of emotions from face images is needed.

We propose a case-based reasoning system capable of classifying facial expressions (given in terms of facial muscle actions) into the emotion categories learned from the user. The case base is a dynamic, incrementally self-organizing, event-content-addressable memory that supports fact retrieval and evaluation of encountered events based on the user's preferences and on generalizations formed from prior input.
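The core retrieval-and-classify loop of such a system can be sketched as follows. This is a minimal illustration only, not the paper's actual memory organization: it assumes cases are stored as (AU-set, emotion-label) pairs learned from the user and that retrieval uses Jaccard overlap between AU sets with a majority vote over the k nearest cases.

```python
from collections import Counter

class CaseBase:
    """Toy case-based reasoner for user-specific emotion labels.

    Stores (AU-set, label) cases supplied by the user and classifies a
    new AU combination by set similarity to stored cases. Hypothetical
    sketch; the paper's case base is incrementally self-organizing.
    """

    def __init__(self, k=3):
        self.cases = []  # list of (frozenset of AU names, emotion label)
        self.k = k

    def learn(self, aus, label):
        """Add a new case learned from the user."""
        self.cases.append((frozenset(aus), label))

    def classify(self, aus):
        """Return the majority label among the k most similar cases."""
        aus = frozenset(aus)
        # Jaccard similarity between the query AU set and each stored case
        ranked = sorted(
            self.cases,
            key=lambda c: len(aus & c[0]) / len(aus | c[0]),
            reverse=True,
        )
        votes = [label for _, label in ranked[: self.k]]
        return Counter(votes).most_common(1)[0][0]
```

For example, after `learn({"AU6", "AU12"}, "happy")` and `learn({"AU4", "AU7"}, "angry")`, a query containing AU6 and AU12 retrieves the "happy" case as its nearest neighbor.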

Two systems for automatic recognition of facial muscle actions (i.e., Action Units, AUs) in face video are presented as well. The first uses temporal templates as the data representation and a combined k-Nearest-Neighbor and rule-based classifier as the recognition engine. Temporal templates are 2D representations of motion history: they encode where and when motion occurred in the input image sequence. The second system exploits particle filtering to track 15 facial points in an input face-profile video and recognizes the temporal dynamics of facial behavior using temporal rules.

The systems have been trained and tested on two different databases: the Cohn-Kanade facial expression database and our own web-based MMI facial expression database. The recognition results achieved by the proposed systems show high concurrent validity with human coding.


Paper presented at Measuring Behavior 2005, 5th International Conference on Methods and Techniques in Behavioral Research, 30 August - 2 September 2005, Wageningen, The Netherlands.

© 2005 Noldus Information Technology bv