Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/104143
Title: Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk
Authors: Jongejan, Bart
Paggio, Patrizia
Navarretta, Costanza
Keywords: Corpora (Linguistics)
Visual perception
Head-Driven Phrase Structure Grammar
Speech acts (Linguistics) -- Data processing
Body language -- Research
Automatic speech recognition
Issue Date: 2017
Publisher: Linköping University Electronic Press.
Citation: Jongejan, B., Paggio, P., & Navarretta, C. (2017). Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk. In Proceedings of the 4th European and 7th Nordic Symposium on Multimodal Communication (MMSYM 2016), Denmark, 10-17.
Abstract: This paper is about the automatic annotation of head movements in videos of face-to-face conversations. Manual annotation of gestures is resource-consuming, and modelling gesture behaviour in different types of communicative settings requires many types of annotated data. Developing methods for automatic annotation is therefore crucial. We present an approach in which an SVM classifier learns to classify head movements based on measurements of velocity, acceleration, and jerk, the third derivative of position with respect to time. The trained classifier is then used to add head movement annotations to new video data. The automatic annotations are evaluated against manual annotations of the same data and reach an accuracy of 73.47%. The results also show that including jerk improves accuracy.
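The abstract describes classification of head movements from kinematic features. The following is a minimal sketch, not the authors' implementation: it assumes tracked head positions per video frame (a hypothetical input), derives velocity, acceleration and jerk magnitudes by finite differences, and trains a scikit-learn SVM on toy data standing in for the manually annotated labels.

```python
# Sketch only: kinematic features (speed, acceleration, jerk) + SVM classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def kinematic_features(positions, fps=25.0):
    """positions: (n_frames, 2) head coordinates per frame (hypothetical input).
    Returns per-frame magnitudes of velocity, acceleration and jerk."""
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)       # 1st derivative of position
    acceleration = np.gradient(velocity, dt, axis=0)    # 2nd derivative
    jerk = np.gradient(acceleration, dt, axis=0)        # 3rd derivative
    return np.column_stack([
        np.linalg.norm(velocity, axis=1),
        np.linalg.norm(acceleration, axis=1),
        np.linalg.norm(jerk, axis=1),
    ])

# Toy data standing in for tracked head positions and manual movement labels.
rng = np.random.default_rng(0)
positions = np.cumsum(rng.normal(size=(500, 2)), axis=0)
labels = rng.integers(0, 2, size=500)  # 1 = head movement, 0 = no movement

X = kinematic_features(positions)
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Dropping the jerk column from the feature matrix gives a rough way to probe the paper's claim that the third derivative contributes to accuracy; the actual feature set, tracker and evaluation protocol are described in the cited proceedings paper.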
URI: https://www.um.edu.mt/library/oar/handle/123456789/104143
ISBN: 9789176854235
Appears in Collections:Scholarly Works - InsLin

Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.