Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/104595
Full metadata record
DC Field | Value | Language
dc.contributor.author | Tanti, Marc | -
dc.contributor.author | Abdilla, Shaun | -
dc.contributor.author | Muscat, Adrian | -
dc.contributor.author | Borg, Claudia | -
dc.contributor.author | Farrugia, Reuben A. | -
dc.contributor.author | Gatt, Albert | -
dc.date.accessioned | 2022-12-21T11:08:43Z | -
dc.date.available | 2022-12-21T11:08:43Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Tanti, M., Abdilla, S., Muscat, A., Borg, C., Farrugia, R. A., & Gatt, A. (2022). Face2Text revisited : improved data set and baseline results. Workshop on People in Vision, Language, and the Mind, Marseille. 41-47. | en_GB
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/104595 | -
dc.description.abstract | Current image description generation models do not transfer well to the task of describing human faces. To encourage the development of more human-focused descriptions, we developed a new data set of facial descriptions based on the CelebA image data set. We describe the properties of this data set, and present results from a face description generator trained on it, which explores the feasibility of using transfer learning from VGGFace/ResNet CNNs. Comparisons are drawn through both automated metrics and human evaluation by 76 English-speaking participants. The descriptions generated by the VGGFace-LSTM + Attention model are closest to the ground truth according to human evaluation whilst the ResNet-LSTM + Attention model obtained the highest CIDEr and CIDEr-D results (1.252 and 0.686 respectively). Together, the new data set and these experimental results provide data and baselines for future work in this area. | en_GB
dc.language.iso | en | en_GB
dc.publisher | European Language Resources Association (ELRA) | en_GB
dc.rights | info:eu-repo/semantics/restrictedAccess | en_GB
dc.subject | Natural language generation (Computer science) | en_GB
dc.subject | Face perception | en_GB
dc.subject | Visual perception | en_GB
dc.title | Face2Text revisited : improved data set and baseline results | en_GB
dc.type | conferenceObject | en_GB
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB
dc.bibliographicCitation.conferencename | Workshop on People in Vision, Language, and the Mind | en_GB
dc.bibliographicCitation.conferenceplace | Marseille, France. 20/06/2022. | en_GB
dc.description.reviewed | peer-reviewed | en_GB
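
The abstract above describes the model family evaluated in the paper: a face-image CNN encoder (VGGFace or ResNet, via transfer learning) feeding an LSTM decoder with attention. The sketch below is a minimal, generic PyTorch rendering of that architecture, not the authors' code: the ResNet-50 backbone (initialised here without pretrained face weights), the additive-attention formulation, and all layer dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class EncoderCNN(nn.Module):
    """CNN encoder: returns a grid of spatial features for attention.
    ResNet-50 is a stand-in; the paper uses VGGFace/ResNet face networks."""
    def __init__(self):
        super().__init__()
        resnet = models.resnet50(weights=None)  # assumption: no pretrained weights
        # Drop the average-pool and FC head; keep the 7x7x2048 feature map.
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])

    def forward(self, images):                       # (B, 3, 224, 224)
        feats = self.backbone(images)                # (B, 2048, 7, 7)
        B, C, H, W = feats.shape
        return feats.view(B, C, H * W).permute(0, 2, 1)  # (B, 49, 2048)

class AttentionLSTMDecoder(nn.Module):
    """LSTM decoder with additive (Bahdanau-style) attention over image regions."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, feat_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.att_feat = nn.Linear(feat_dim, hidden_dim)
        self.att_hid = nn.Linear(hidden_dim, hidden_dim)
        self.att_score = nn.Linear(hidden_dim, 1)
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, captions):
        # feats: (B, R, feat_dim); captions: (B, T) token ids (teacher forcing)
        B, R, _ = feats.shape
        h = feats.new_zeros(B, self.lstm.hidden_size)
        c = feats.new_zeros(B, self.lstm.hidden_size)
        emb = self.embed(captions)                   # (B, T, embed_dim)
        proj = self.att_feat(feats)                  # precompute once: (B, R, hidden)
        logits = []
        for t in range(captions.size(1)):
            # Attention weights over the R image regions, conditioned on h.
            scores = self.att_score(torch.tanh(proj + self.att_hid(h).unsqueeze(1)))
            alpha = torch.softmax(scores, dim=1)     # (B, R, 1)
            context = (alpha * feats).sum(dim=1)     # (B, feat_dim)
            h, c = self.lstm(torch.cat([emb[:, t], context], dim=1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)            # (B, T, vocab_size)

# Shape check with dummy data (vocabulary size is arbitrary here).
enc, dec = EncoderCNN(), AttentionLSTMDecoder(vocab_size=5000)
imgs = torch.randn(2, 3, 224, 224)
caps = torch.randint(0, 5000, (2, 12))
print(dec(enc(imgs), caps).shape)  # torch.Size([2, 12, 5000])
```

Projecting the image features through `att_feat` once, outside the time loop, avoids recomputing a constant term at every decoding step; only the hidden-state projection and softmax change per step.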
Appears in Collections: Scholarly Works - InsLin

Files in This Item:
File | Description | Size | Format
Face2Text_revisited_improved_data_set_and_baseline_results_2022.pdf (Restricted Access) | - | 1.04 MB | Adobe PDF

