Bimodal emotion recognition
N. Sebe¹, E. Bakker², I. Cohen³, T. Gevers¹ and T. Huang⁴
¹Faculty of Science, University of Amsterdam, The Netherlands
²LIACS Media Lab, Leiden University, The Netherlands
³HP Labs, Palo Alto, CA, USA
⁴University of Illinois, Urbana-Champaign, IL, USA
Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite these important advances, one necessary ingredient for natural interaction is still missing: emotions. This paper describes the challenging problem of bimodal emotion recognition and advocates the use of probabilistic graphical models for fusing the different modalities. We test our audio-visual emotion recognition approach on 38 subjects with 11 HCI-related affect states. The experimental results show that the average person-dependent emotion recognition accuracy is greatly improved when both visual and audio information are used in classification.
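The fusion idea behind the abstract can be illustrated with a minimal sketch. This is not the authors' model, only the simplest probabilistic fusion rule: combining the class posteriors of an audio classifier and a visual classifier under a conditional-independence assumption (product rule). The function name, state labels, and example probabilities are hypothetical; the paper itself uses 11 affect states.

```python
import numpy as np

# Hypothetical affect states (the paper uses 11; 3 here for brevity).
STATES = ["interest", "confusion", "frustration"]

def fuse_posteriors(p_audio, p_visual, prior=None):
    """Product-rule fusion of audio and visual class posteriors.

    A simple special case of probabilistic fusion: the two modalities
    are assumed conditionally independent given the affect state.
    """
    p_audio = np.asarray(p_audio, dtype=float)
    p_visual = np.asarray(p_visual, dtype=float)
    if prior is None:
        prior = np.full_like(p_audio, 1.0 / len(p_audio))
    # Each posterior already contains the prior once; dividing by the
    # prior corrects for it being counted twice in the product.
    joint = p_audio * p_visual / prior
    return joint / joint.sum()  # renormalize to a distribution

# Hypothetical single-modality classifier outputs for one utterance:
p_a = [0.5, 0.3, 0.2]   # audio posterior
p_v = [0.6, 0.1, 0.3]   # visual posterior
fused = fuse_posteriors(p_a, p_v)
print(STATES[int(np.argmax(fused))])
```

When the two modalities agree on a class, the fused posterior is sharper than either alone, which is one intuition for why the bimodal classifier outperforms the unimodal ones.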
Paper presented at Measuring Behavior 2005, 5th International Conference on Methods and Techniques in Behavioral Research, 30 August - 2 September 2005, Wageningen, The Netherlands.
© 2005 Noldus Information Technology bv