Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/85819
Full metadata record
DC Field | Value | Language
dc.contributor.author | Gatt, Albert | -
dc.contributor.author | Tanti, Marc | -
dc.contributor.author | Muscat, Adrian | -
dc.contributor.author | Paggio, Patrizia | -
dc.contributor.author | Farrugia, Reuben A. | -
dc.contributor.author | Borg, Claudia | -
dc.contributor.author | Camilleri, Kenneth P. | -
dc.contributor.author | Rosner, Michael | -
dc.contributor.author | van der Plas, Lonneke | -
dc.date.accessioned | 2021-12-20T10:56:27Z | -
dc.date.available | 2021-12-20T10:56:27Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | Gatt, A., Tanti, M., Muscat, A., Paggio, P., Farrugia, R. A., Borg, C., ... & van der Plas, L. (2018). Face2Text: Collecting an annotated image description corpus for the generation of rich face descriptions. International Conference on Language Resources and Evaluation, Miyazaki, 3323-3328. | en_GB
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/85819 | -
dc.description.abstract | The past few years have witnessed renewed interest in NLP tasks at the interface between vision and language. One intensively studied problem is that of automatically generating text from images. In this paper, we extend this problem to the more specific domain of face description. Unlike scene descriptions, face descriptions are more fine-grained and rely on attributes extracted from the image, rather than objects and relations. Given that no data exists for this task, we present an ongoing crowdsourcing study to collect a corpus of descriptions of face images taken ‘in the wild’. To gain a better understanding of the variation we find in face description and the possible issues that this may raise, we also conducted an annotation study on a subset of the corpus. Primarily, we found descriptions to refer to a mixture of attributes, not only physical, but also emotional and inferential, which is bound to create further challenges for current image-to-text methods. | en_GB
dc.language.iso | en | en_GB
dc.publisher | LREC | en_GB
dc.rights | info:eu-repo/semantics/restrictedAccess | en_GB
dc.subject | Face perception | en_GB
dc.subject | Natural language generation (Computer science) | en_GB
dc.subject | Crowdsourcing | en_GB
dc.title | Face2Text: collecting an annotated image description corpus for the generation of rich face descriptions | en_GB
dc.type | conferenceObject | en_GB
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB
dc.bibliographicCitation.conferencename | International Conference on Language Resources and Evaluation | en_GB
dc.bibliographicCitation.conferenceplace | Miyazaki, Japan, May 2018 | en_GB
dc.description.reviewed | peer-reviewed | en_GB
Appears in Collections: Scholarly Works - FacICTCCE

Files in This Item:
File: L18-1525.pdf (Restricted Access), 316.84 kB, Adobe PDF

