Intonation patterns in autistic and non-autistic children

F.B.A. Naber1, G. de Krom2, S.H.N. Willemsen-Swinkels1, H. van Engeland1

1 Department of Child and Adolescent Psychiatry, University Hospital Utrecht
and Rudolf Magnus Institute for Neurosciences, Utrecht University, Utrecht, The Netherlands
2 Department of Computer Science and Humanities, Utrecht University, Utrecht, The Netherlands


An impairment in communication is a marked characteristic of people with autism. When speech does develop, it often sounds ‘different’ from the speech of non-autistic people. However, little is known about what exactly distinguishes the speech of people with autism from that of other people. Speech has several parameters, such as intonation, duration and intensity, which can be measured and analysed with specialized equipment. The present study examined the differences between the speech of autistic and non-autistic children. The speech of autistic children is often described in the literature as ‘monotonous’ and ‘flat’. We therefore hypothesize that intonation is deviant in people with autism.

To investigate the differences between the speech of autistic and non-autistic children, a number of utterances from both groups were compared. The frequency characteristics of the intonation patterns were measured using speech analysing software; frequency was chosen because, of these parameters, it appears to be the first to stabilize developmentally. The present investigation focuses on simple declarative subject-verb-object utterances, produced spontaneously under controlled conditions. Frequency measurements were obtained using a pitch meter and oscillomink tracings. In a separate experiment, trained human listeners were asked to evaluate the intonation in the utterances produced by the autistic and non-autistic children; their task was to label each utterance as deviant or not. The listeners judged the same utterances from both groups that had been analysed acoustically. To ensure that the listeners were not distracted by the meaning of the utterances themselves, the utterances were resynthesized so that the original intonation was preserved while the meaningful sounds were converted to nonsense syllables. During presentation of each stimulus, the original utterance was displayed on a computer screen. In this way the listeners knew which intonation contour to expect, while not being distracted by characteristics other than intonation. The results of both experiments will be discussed.
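As a rough illustration of the two steps described above, the sketch below extracts a fundamental frequency (F0) contour from a recorded utterance and resynthesizes that contour as a segmentally empty ‘hum’. It is only a minimal sketch of the general technique, assuming the Python library parselmouth (an interface to the Praat speech analysis engine), a hypothetical file name, and typical analysis settings for children's voices; it is not the pitch meter, oscillomink, or resynthesis procedure actually used in the study.

```python
# Minimal sketch (assumed setup, not the original 1998 apparatus):
# extract an F0 contour and resynthesize it as a wordless hum.
import numpy as np
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("utterance.wav")  # hypothetical recording

# F0 contour in 10 ms frames; a 75-600 Hz search range suits children's voices.
pitch = snd.to_pitch(time_step=0.01, pitch_floor=75.0, pitch_ceiling=600.0)
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]  # keep voiced frames only

# Simple summary measures of the intonation contour: a 'flat' contour
# should show a small F0 range and standard deviation.
print(f"mean F0:  {f0.mean():.1f} Hz")
print(f"F0 range: {f0.max() - f0.min():.1f} Hz")
print(f"F0 SD:    {f0.std():.1f} Hz")

# Resynthesize the contour as a hum: the original intonation is preserved
# while all segmental (lexical) information is removed, so listeners can
# judge intonation without being distracted by the words themselves.
hum = call(pitch, "To Sound (hum)")
call(hum, "Save as WAV file", "utterance_hum.wav")
```

Under the ‘monotonous and flat’ hypothesis, one would expect such summary measures to show a reduced F0 range or variability for the autistic group, and the hum-like resynthesis plays the same role as the nonsense-syllable stimuli in the listening experiment.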


Poster presented at Measuring Behavior '98, 2nd International Conference on Methods and Techniques in Behavioral Research, 18-21 August 1998, Groningen, The Netherlands

© 1998 Noldus Information Technology b.v.