Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/104335
Title: Detecting head movements in video-recorded dyadic conversations
Authors: Paggio, Patrizia; Agirrezabal, Manex; Jongejan, Bart; Navarretta, Costanza
Keywords: Automatic speech recognition; Human-computer interaction; Modality (Linguistics); Visual communication -- Digital techniques; Face perception; Conversation analysis; Dialogue analysis
Issue Date: 2018
Publisher: Association for Computing Machinery
Citation: Paggio, P., Jongejan, B., Agirrezabal, M., & Navarretta, C. (2018, October). Detecting head movements in video-recorded dyadic conversations. Proceedings of the 20th International Conference on Multimodal Interaction: Adjunct, Colorado, 1-6.
Abstract: This paper is about the automatic recognition of head movements in videos of face-to-face dyadic conversations. We present an approach where recognition of head movements is cast as a multimodal frame classification problem based on visual and acoustic features. The visual features include velocity, acceleration, and jerk values associated with head movements, while the acoustic ones are pitch and intensity measurements from the co-occurring speech. We present the results obtained by training and testing a number of classifiers on manually annotated data from two conversations. The best performing classifier, a Multilayer Perceptron trained using all the features, obtains 0.75 accuracy and outperforms the mono-modal baseline classifier.
URI: https://www.um.edu.mt/library/oar/handle/123456789/104335
ISBN: 9781450360029
Appears in Collections: Scholarly Works - InsLin
Files in This Item:
File | Description | Size | Format
--- | --- | --- | ---
Detecting_head_movements_in_video-recorded_dyadic_conversations(2018).pdf (Restricted Access) | | 109.94 kB | Adobe PDF
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.
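The abstract above frames head-movement detection as per-frame classification over combined visual features (velocity, acceleration, jerk) and acoustic features (pitch, intensity), with a Multilayer Perceptron as the best-performing model. The sketch below is a minimal illustration of that kind of multimodal frame classification using scikit-learn; the placeholder data, feature handling, and hyperparameters are assumptions and do not reproduce the authors' implementation or results.

```python
# Minimal sketch (not the authors' code): frame-level head-movement
# classification from concatenated visual and acoustic features,
# assuming per-frame feature vectors and binary movement labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: 3 visual features (velocity, acceleration, jerk)
# and 2 acoustic features (pitch, intensity) per video frame.
n_frames = 2000
X_visual = rng.normal(size=(n_frames, 3))
X_acoustic = rng.normal(size=(n_frames, 2))
X = np.hstack([X_visual, X_acoustic])   # multimodal feature vector per frame
y = rng.integers(0, 2, size=n_frames)   # 1 = head movement, 0 = no movement

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Standardize features, then train a small Multilayer Perceptron.
scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(scaler.transform(X_test))))
```

With real annotated frames in place of the random arrays, the same pipeline allows comparing the multimodal feature set against visual-only or acoustic-only baselines, which is the kind of comparison the abstract reports.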