Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/47322
Title: PAGAN : video affect annotation made easy
Authors: Melhart, David; Liapis, Antonios; Yannakakis, Georgios N.
Keywords: Computer software; Human-computer interaction; Video tapes -- Editing; Motion pictures -- Editing
Issue Date: 2019
Publisher: Association for the Advancement of Affective Computing
Citation: Melhart, D., Liapis, A., & Yannakakis, G. N. (2019). PAGAN : video affect annotation made easy. Proceedings of the International Conference on Affective Computing and Intelligent Interaction, Cambridge.
Abstract: How could we gather affect annotations in a rapid, unobtrusive, and accessible fashion? And how can we ensure that these annotations are reliable enough for data-hungry affect modelling methods? This paper addresses these questions by introducing PAGAN, an accessible, general-purpose, online platform for crowdsourcing affect labels in videos. The design of PAGAN overcomes the accessibility limitations of existing annotation tools, which often require advanced technical skills or even the on-site involvement of the researcher. Such limitations often yield affective corpora that are restricted in size, scope and use, which in turn limits the applicability of modern data-demanding machine learning methods. The description of PAGAN is accompanied by an exploratory study which compares the reliability of three continuous annotation tools currently supported by the platform. Our key results reveal higher inter-rater agreement when annotation traces are processed in a relative manner and collected via unbounded labelling.
URI: https://www.um.edu.mt/library/oar/handle/123456789/47322
Appears in Collections: Scholarly Works - InsDG
Files in This Item:
File: Pagan_video_affect_annotation_made_easy_2019.pdf (Restricted Access)
Size: 1.88 MB
Format: Adobe PDF
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.
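The abstract's key result — that annotation traces agree more when processed in a relative manner — can be illustrated with a minimal sketch. This is not code from the paper or the PAGAN platform; it assumes a simple "relative" interpretation (keeping only the per-step direction of change) and a plain proportion-of-matches agreement measure, both chosen here for illustration.

```python
# Hypothetical sketch (not from the paper): absolute vs. relative agreement
# between two continuous affect annotation traces.

def to_relative(trace):
    """Map a continuous trace to its per-step direction of change (-1, 0, +1)."""
    return [(b > a) - (b < a) for a, b in zip(trace, trace[1:])]

def agreement(a, b):
    """Fraction of positions on which two equal-length sequences agree."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Two hypothetical annotators watching the same video: same trend, offset values.
ann1 = [0.1, 0.3, 0.5, 0.4, 0.6, 0.9]
ann2 = [0.4, 0.6, 0.8, 0.7, 0.9, 1.0]

absolute = agreement(ann1, ann2)                            # exact values never match: 0.0
relative = agreement(to_relative(ann1), to_relative(ann2))  # directions always match: 1.0
```

In this toy case the two annotators use different regions of the rating scale, so their absolute values never coincide, yet the relative (direction-of-change) view agrees perfectly — one intuition behind the paper's reported advantage of relative processing.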