Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/104335
Full metadata record
DC Field | Value | Language
dc.contributor.author | Paggio, Patrizia | -
dc.contributor.author | Agirrezabal, Manex | -
dc.contributor.author | Jongejan, Bart | -
dc.contributor.author | Navarretta, Costanza | -
dc.date.accessioned | 2022-12-12T13:17:07Z | -
dc.date.available | 2022-12-12T13:17:07Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | Paggio, P., Jongejan, B., Agirrezabal, M., & Navarretta, C. (2018, October). Detecting head movements in video-recorded dyadic conversations. Proceedings of the 20th International Conference on Multimodal Interaction: Adjunct, Boulder, Colorado, 1-6. | en_GB
dc.identifier.isbn | 9781450360029 | -
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/104335 | -
dc.description.abstract | This paper is about the automatic recognition of head movements in videos of face-to-face dyadic conversations. We present an approach where recognition of head movements is cast as a multimodal frame classification problem based on visual and acoustic features. The visual features include velocity, acceleration, and jerk values associated with head movements, while the acoustic ones are pitch and intensity measurements from the co-occurring speech. We present the results obtained by training and testing a number of classifiers on manually annotated data from two conversations. The best performing classifier, a Multilayer Perceptron trained using all the features, obtains 0.75 accuracy and outperforms the mono-modal baseline classifier. | en_GB
dc.language.iso | en | en_GB
dc.publisher | Association for Computing Machinery | en_GB
dc.rights | info:eu-repo/semantics/restrictedAccess | en_GB
dc.subject | Automatic speech recognition | en_GB
dc.subject | Human-computer interaction | en_GB
dc.subject | Modality (Linguistics) | en_GB
dc.subject | Visual communication -- Digital techniques | en_GB
dc.subject | Face perception | en_GB
dc.subject | Conversation analysis | en_GB
dc.subject | Dialogue analysis | en_GB
dc.title | Detecting head movements in video-recorded dyadic conversations | en_GB
dc.type | conferenceObject | en_GB
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB
dc.bibliographicCitation.conferencename | International Conference on Multimodal Interaction : Adjunct | en_GB
dc.bibliographicCitation.conferenceplace | Boulder, Colorado. 16-20/10/2018. | en_GB
dc.description.reviewed | peer-reviewed | en_GB
dc.identifier.doi | 10.1145/3281151.3281152 | -
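The abstract describes per-frame visual features (velocity, acceleration, and jerk of the head) feeding a frame classifier. A minimal sketch of how such kinematic features could be derived from a head-position track with NumPy; the frame rate, coordinate layout, and function name are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def kinematic_features(positions, fps=25.0):
    """Return per-frame velocity, acceleration, and jerk magnitudes.

    positions: (n_frames, 2) array of head coordinates per frame
    (e.g. a tracked nose-tip point; layout assumed for illustration).
    """
    dt = 1.0 / fps
    vel = np.gradient(positions, dt, axis=0)   # first time derivative
    acc = np.gradient(vel, dt, axis=0)         # second time derivative
    jerk = np.gradient(acc, dt, axis=0)        # third time derivative
    # Collapse the 2-D derivatives to one magnitude per frame, giving
    # three scalar feature streams suitable for a frame classifier.
    return (np.linalg.norm(vel, axis=1),
            np.linalg.norm(acc, axis=1),
            np.linalg.norm(jerk, axis=1))

# Synthetic example: a 5-frame track moving at constant speed, so
# velocity is constant and acceleration/jerk vanish.
track = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0],
                  [3.0, 0.0], [4.0, 0.0]])
v, a, j = kinematic_features(track, fps=1.0)
```

In the paper these visual streams are combined with acoustic features (pitch and intensity from the co-occurring speech) before classification; the sketch covers only the kinematic side.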
Appears in Collections:Scholarly Works - InsLin

Files in This Item:
File | Description | Size | Format
Detecting_head_movements_in_video-recorded_dyadic_conversations(2018).pdf | Restricted Access | 109.94 kB | Adobe PDF
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.