Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/104640
Title: Grounded textual entailment
Authors: Trong Vu, Hoa
Greco, Claudio
Erofeeva, Aliia
Jafaritazehjan, Somayeh
Linders, Guido
Tanti, Marc
Testoni, Alberto
Bernardi, Raffaella
Gatt, Albert
Keywords: Semantic computing
Error-correcting codes (Information theory)
Data sets
Issue Date: 2018
Publisher: Association for Computational Linguistics
Citation: Trong Vu, H., Greco, C., Erofeeva, A., Jafaritazehjan, S., Linders, G., Tanti, M., . . . Gatt, A. (2018). Grounded textual entailment. Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, 2354-2368.
Abstract: Capturing semantic relations between sentences, such as entailment, is a long-standing challenge for computational semantics. Logic-based models analyse entailment in terms of possible worlds (interpretations, or situations) where a premise P entails a hypothesis H iff in all worlds where P is true, H is also true. Statistical models view this relationship probabilistically, addressing it in terms of whether a human would likely infer H from P. In this paper, we wish to bridge these two perspectives, by arguing for a visually-grounded version of the Textual Entailment task. Specifically, we ask whether models can perform better if, in addition to P and H, there is also an image (corresponding to the relevant “world” or “situation”). We use a multimodal version of the SNLI dataset (Bowman et al., 2015) and we compare “blind” and visually-augmented models of textual entailment. We show that visual information is beneficial, but we also conduct an in-depth error analysis that reveals that current multimodal models are not performing “grounding” in an optimal fashion.
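Note: The following is a minimal illustrative sketch of the "blind" versus visually-augmented comparison described in the abstract, assuming a simple late-fusion architecture in which precomputed premise, hypothesis, and image feature vectors are concatenated and classified. The dimensions, fusion-by-concatenation scheme, and class labels are assumptions for illustration, not the specific models evaluated in the paper.

    # Sketch of a "blind" vs. visually-grounded entailment classifier.
    # Architecture, feature dimensions, and fusion scheme are illustrative
    # assumptions, not the authors' exact models.
    import torch
    import torch.nn as nn

    class EntailmentClassifier(nn.Module):
        def __init__(self, text_dim=300, image_dim=2048, hidden_dim=512, use_image=False):
            super().__init__()
            self.use_image = use_image
            # Premise and hypothesis encodings are concatenated; the image
            # feature vector is appended only in the grounded variant.
            input_dim = 2 * text_dim + (image_dim if use_image else 0)
            self.mlp = nn.Sequential(
                nn.Linear(input_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 3),  # entailment / neutral / contradiction
            )

        def forward(self, premise_vec, hypothesis_vec, image_vec=None):
            feats = [premise_vec, hypothesis_vec]
            if self.use_image and image_vec is not None:
                feats.append(image_vec)
            return self.mlp(torch.cat(feats, dim=-1))

    # The "blind" model sees only the two sentences; the grounded model also
    # receives an image feature vector (e.g. from a pretrained CNN encoder).
    blind = EntailmentClassifier(use_image=False)
    grounded = EntailmentClassifier(use_image=True)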
URI: https://www.um.edu.mt/library/oar/handle/123456789/104640
Appears in Collections:Scholarly Works - InsLin

Files in This Item:
File: Grounded_textual_entailment_2018.pdf (Restricted Access)
Size: 1.36 MB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.