Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/104143
Full metadata record
DC Field | Value | Language
dc.contributor.author | Jongejan, Bart | -
dc.contributor.author | Paggio, Patrizia | -
dc.contributor.author | Navarretta, Costanza | -
dc.date.accessioned | 2022-11-30T10:03:20Z | -
dc.date.available | 2022-11-30T10:03:20Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | Jongejan, B., Paggio, P., & Navarretta, C. (2017). Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk. In Proceedings of the 4th European and 7th Nordic Symposium on Multimodal Communication (MMSYM 2016), Copenhagen, Denmark, pp. 10-17. | en_GB
dc.identifier.isbn | 9789176854235 | -
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/104143 | -
dc.description.abstract | This paper addresses the automatic annotation of head movements in videos of face-to-face conversations. Manual annotation of gestures is resource-consuming, and modelling gesture behaviour in different types of communicative settings requires many types of annotated data, so developing methods for automatic annotation is crucial. We present an approach in which an SVM classifier learns to classify head movements from measurements of velocity, acceleration, and jerk (the third derivative of position with respect to time). The trained classifier is then used to add head movement annotations to new video data. The automatic annotations are evaluated against manual annotations of the same data and reach an accuracy of 73.47%. The results also show that using jerk improves accuracy. | en_GB
dc.language.iso | en | en_GB
dc.publisher | Linköping University Electronic Press | en_GB
dc.rights | info:eu-repo/semantics/openAccess | en_GB
dc.subject | Corpora (Linguistics) | en_GB
dc.subject | Visual perception | en_GB
dc.subject | Head-Driven Phrase Structure Grammar | en_GB
dc.subject | Speech acts (Linguistics) -- Data processing | en_GB
dc.subject | Body language -- Research | en_GB
dc.subject | Automatic speech recognition | en_GB
dc.title | Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk | en_GB
dc.type | conferenceObject | en_GB
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation, provided that the author is properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB
dc.bibliographicCitation.conferencename | The 4th European and 7th Nordic Symposium on Multimodal Communication (MMSYM 2016) | en_GB
dc.bibliographicCitation.conferenceplace | Copenhagen, Denmark, 29-30/09/2016 | en_GB
dc.description.reviewed | peer-reviewed | en_GB
Appears in Collections: Scholarly Works - InsLin
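
For readers curious how the feature pipeline described in the abstract might look in practice, the following is a minimal sketch: velocity, acceleration, and jerk are estimated by finite differences over a tracked head-position sequence, and the resulting per-frame feature vectors are fed to an SVM. This is an illustration under assumptions, not the authors' implementation: the tracker output shape, frame rate, binary labels, and RBF kernel are all placeholders, and the paper's actual features and annotation scheme are described in the full text.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def kinematic_features(positions, fps=25.0):
    """Per-frame velocity, acceleration, and jerk magnitudes computed by
    finite differences from an (n_frames, 2) array of head positions."""
    dt = 1.0 / fps
    vel = np.gradient(positions, dt, axis=0)   # first derivative of position
    acc = np.gradient(vel, dt, axis=0)         # second derivative
    jerk = np.gradient(acc, dt, axis=0)        # third derivative (jerk)
    # Stack the Euclidean magnitudes into an (n_frames, 3) feature matrix.
    return np.column_stack([
        np.linalg.norm(vel, axis=1),
        np.linalg.norm(acc, axis=1),
        np.linalg.norm(jerk, axis=1),
    ])

# Placeholder data standing in for real tracker output and manual annotations:
# a random-walk (x, y) head track and fake movement/no-movement labels.
rng = np.random.default_rng(0)
positions = rng.normal(size=(1000, 2)).cumsum(axis=0)
labels = rng.integers(0, 2, size=1000)

X = kinematic_features(positions)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0
)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real data, the random arrays would be replaced by per-frame head positions from a tracker and the manual movement annotations reported in the paper; the abstract's finding suggests that including the jerk column improves accuracy over velocity and acceleration alone.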


