Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/104347
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Paggio, Patrizia
dc.contributor.author: Navarretta, Costanza
dc.contributor.author: Jongejan, Bart
dc.date.accessioned: 2022-12-12T14:58:11Z
dc.date.available: 2022-12-12T14:58:11Z
dc.date.issued: 2017
dc.identifier.citation: Paggio, P., Navarretta, C., & Jongejan, B. (2017, April). Automatic identification of head movements in video-recorded conversations: can words help? Proceedings of the Sixth Workshop on Vision and Language (pp. 40-42). [en_GB]
dc.identifier.isbn: 9781945626517
dc.identifier.uri: https://www.um.edu.mt/library/oar/handle/123456789/104347
dc.description.abstract: Head movements are the most frequent gestures in face-to-face communication, and are important for feedback giving (Allwood, 1988; Yngve, 1970; Duncan, 1972) and turn management (McClave, 2000). Their automatic recognition has been addressed by many multimodal communication researchers (Heylen et al., 2007; Paggio and Navarretta, 2011; Morency et al., 2007). The method for automatic head movement annotation described in this paper is implemented as a plugin to the freely available multimodal annotation tool ANVIL (Kipp, 2004), using OpenCV (Bradski and Kaehler, 2008), combined with a command line script that performs a number of file transformations and invokes the LibSVM software (Chang and Lin, 2011) to train and test a support vector classifier. Finally, the script produces a new annotation in ANVIL containing the learned head movements. The present method builds on Jongejan (2012) by adding jerk to the movement features and by applying machine learning. In this paper we also conduct a statistical analysis of the distribution of words in the annotated data to determine whether word features could improve the learning model. Research aimed at the automatic recognition of head movements, especially nods and shakes, has addressed the problem in essentially two ways. One strand of studies uses data in which the face, or a part of it, has been tracked with various devices, and typically trains HMM models on such data (Kapoor and Picard, 2001; Tan and Rong, 2003; Wei et al., 2013). The accuracy reported in these studies is in the range 75-89%. Other studies, in contrast, try to identify head movements from raw video material using computer vision techniques (Zhao et al., 2012; Morency et al., 2005). Results vary depending on factors such as video quality, lighting conditions, and whether the movements are naturally occurring or rehearsed. The best results so far are probably those of Morency et al. (2007), where an LDCRF model achieves an accuracy of 65-75% at a false positive rate of 20-30% and outperforms earlier SVM and HMM models. Our work belongs to the latter strand of research in that we also work with raw video data. [en_GB]
dc.language.iso: en [en_GB]
dc.publisher: The Association for Computational Linguistics [en_GB]
dc.rights: info:eu-repo/semantics/restrictedAccess [en_GB]
dc.subject: Body language -- Research [en_GB]
dc.subject: Speech and gesture [en_GB]
dc.subject: Facial expression -- Data processing [en_GB]
dc.subject: Conversation analysis [en_GB]
dc.subject: Speech processing systems [en_GB]
dc.subject: Modality (Linguistics) [en_GB]
dc.subject: Machine learning [en_GB]
dc.title: Automatic identification of head movements in video-recorded conversations : can words help? [en_GB]
dc.type: conferenceObject [en_GB]
dc.rights.holder: The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. [en_GB]
dc.bibliographicCitation.conferencename: The 6th Workshop on Vision and Language [en_GB]
dc.bibliographicCitation.conferenceplace: Valencia, Spain. 04/04/2017. [en_GB]
dc.description.reviewed: peer-reviewed [en_GB]
dc.identifier.doi: 10.18653/v1/W17-2006
Appears in Collections: Scholarly Works - InsLin

Files in This Item:
File: Automatic_identification_of_head_movements_in_video-recorded_conversations_Can_words_help(2017).pdf
Description: Restricted Access
Size: 82.69 kB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.
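The abstract above describes a pipeline that derives kinematic features, including jerk, from tracked head positions and trains a support vector classifier on them. The sketch below is NOT the authors' implementation (which uses an ANVIL plugin, OpenCV and LibSVM); it only illustrates that kind of pipeline with scikit-learn on a synthetic head-position track. The frame rate, the simulated "nod", the use of feature magnitudes, and the frame labels are all illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def kinematic_features(y, fps=25.0):
    """Per-frame |velocity|, |acceleration|, |jerk| via finite differences."""
    dt = 1.0 / fps
    v = np.gradient(y, dt)   # first derivative of position
    a = np.gradient(v, dt)   # second derivative
    j = np.gradient(a, dt)   # third derivative (jerk)
    return np.abs(np.stack([v, a, j], axis=1))

rng = np.random.default_rng(0)
n, fps = 200, 25.0
t = np.arange(n) / fps
y = 0.02 * rng.standard_normal(n)  # small tracking noise (no movement)
# Simulate a "nod": a vertical oscillation in frames 80-119.
y[80:120] += 0.5 * np.sin(2 * np.pi * 3.0 * (t[80:120] - t[80]))
labels = np.zeros(n, dtype=int)
labels[80:120] = 1                 # 1 = head movement, 0 = no movement

X = kinematic_features(y, fps)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
pred = clf.predict(X)
print(f"training accuracy: {(pred == labels).mean():.2f}")
```

Standardizing the features matters here because jerk values are orders of magnitude larger than velocities; without scaling the RBF kernel would be dominated by a single feature.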