Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/104143
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jongejan, Bart | - |
dc.contributor.author | Paggio, Patrizia | - |
dc.contributor.author | Navarretta, Costanza | - |
dc.date.accessioned | 2022-11-30T10:03:20Z | - |
dc.date.available | 2022-11-30T10:03:20Z | - |
dc.date.issued | 2017 | - |
dc.identifier.citation | Jongejan, B., Paggio, P., & Navarretta, C. (2017). Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk. In Proceedings of the 4th European and 7th Nordic Symposium on Multimodal Communication (MMSYM 2016), Copenhagen, Denmark (pp. 10-17). | en_GB |
dc.identifier.isbn | 9789176854235 | - |
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/104143 | - |
dc.description.abstract | This paper is about the automatic annotation of head movements in videos of face-to-face conversations. Manual annotation of gestures is resource-consuming, and modelling gesture behaviour in different types of communicative settings requires many types of annotated data; developing methods for automatic annotation is therefore crucial. We present an approach in which an SVM classifier learns to classify head movements based on measurements of velocity, acceleration, and jerk (the third derivative of position with respect to time). The trained classifier is then used to add head movement annotations to new video data. The automatic annotations are evaluated against manual annotations of the same data, showing an accuracy of 73.47%. The results also show that using jerk improves accuracy. | en_GB |
dc.language.iso | en | en_GB |
dc.publisher | Linköping University Electronic Press | en_GB |
dc.rights | info:eu-repo/semantics/openAccess | en_GB |
dc.subject | Corpora (Linguistics) | en_GB |
dc.subject | Visual perception | en_GB |
dc.subject | Head-Driven Phrase Structure Grammar | en_GB |
dc.subject | Speech acts (Linguistics) -- Data processing | en_GB |
dc.subject | Body language -- Research | en_GB |
dc.subject | Automatic speech recognition | en_GB |
dc.title | Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk | en_GB |
dc.type | conferenceObject | en_GB |
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB |
dc.bibliographicCitation.conferencename | The 4th European and 7th Nordic Symposium on Multimodal Communication (MMSYM 2016) | en_GB |
dc.bibliographicCitation.conferenceplace | Copenhagen, Denmark. 29-30/09/2016. | en_GB |
dc.description.reviewed | peer-reviewed | en_GB |
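The approach described in the abstract (an SVM classifying head movements from velocity, acceleration, and jerk) can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic data, feature extraction, and scikit-learn pipeline below are all assumptions made for illustration only.

```python
# Illustrative sketch (NOT the paper's code): train an SVM to label time frames
# as head movement vs. no movement from kinematic features, as the abstract
# describes. Data here is synthetic; the paper used video-tracked head positions.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def kinematic_features(positions, dt=1 / 30):
    """Per-frame velocity, acceleration, and jerk magnitudes from a 2-D track."""
    v = np.gradient(positions, dt, axis=0)   # first derivative of position
    a = np.gradient(v, dt, axis=0)           # second derivative
    j = np.gradient(a, dt, axis=0)           # third derivative: jerk
    return np.column_stack([
        np.linalg.norm(v, axis=1),
        np.linalg.norm(a, axis=1),
        np.linalg.norm(j, axis=1),
    ])

# Synthetic stand-in: 100 frames of a still head, 100 frames of a 3 Hz nod.
t = np.arange(100) / 30
still = rng.normal(0, 0.01, size=(100, 2))
nod = np.column_stack([np.zeros(100),
                       0.5 * np.sin(2 * np.pi * 3 * t)]) + rng.normal(0, 0.01, (100, 2))

X = np.vstack([kinematic_features(still), kinematic_features(nod)])
y = np.array([0] * 100 + [1] * 100)  # 0 = no movement, 1 = movement

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

On this toy data the kinematic features separate the two classes almost perfectly; the paper's 73.47% accuracy reflects the much harder real-video setting, where movement boundaries and subtle motions are ambiguous even for human annotators.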
Appears in Collections: | Scholarly Works - InsLin |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Classifying_head_movements_in_video-recorded_conversations_based_on_movement_velocity,_acceleration_and_jerk(2017).pdf | | 1.8 MB | Adobe PDF | View/Open |
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.