Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/22011
Title: | Where to put the image in an image caption generator |
Authors: | Tanti, Marc; Gatt, Albert; Camilleri, Kenneth P. |
Keywords: | Computational intelligence; Image analysis; Computer network architectures |
Issue Date: | 2017 |
Publisher: | Cambridge University Press |
Citation: | Tanti, M., Gatt, A., & Camilleri, K. P. (2017). Where to put the Image in an Image Caption Generator. Natural Language Engineering, 24 (3), 467–489. |
Abstract: | When a neural language model is used for caption generation, the image information can be fed to the neural network either by directly incorporating it in a recurrent neural network (conditioning the language model by injecting image features) or in a layer following the recurrent neural network (conditioning the language model by merging the image features). While merging implies that visual features are bound at the end of the caption generation process, injecting can bind the visual features at a variety of stages. In this paper we empirically show that late binding is superior to early binding in terms of different evaluation metrics. This suggests that the different modalities (visual and linguistic) for caption generation should not be jointly encoded by the RNN; rather, the multimodal integration should be delayed to a subsequent stage. Furthermore, this suggests that recurrent neural networks should not be viewed as actually generating text, but only as encoding it for prediction in a subsequent layer. |
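The two conditioning strategies described in the abstract can be illustrated with a minimal numpy sketch (all dimensions, weights, and variable names here are assumed for illustration, not taken from the paper): in the *inject* architecture the image vector is fed into the RNN at every step, while in the *merge* architecture the RNN encodes the words alone and the image features are combined with its output only in a final prediction layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed, for illustration only).
d_word, d_img, d_hid, vocab = 8, 10, 16, 20
words = [rng.standard_normal(d_word) for _ in range(5)]  # word embeddings
img = rng.standard_normal(d_img)                         # image feature vector


def rnn_step(h, x, W, U):
    """One vanilla RNN step: h' = tanh(W x + U h)."""
    return np.tanh(W @ x + U @ h)


# --- Inject: image features enter the RNN at every time step ---
W_inj = rng.standard_normal((d_hid, d_word + d_img)) * 0.1
U_inj = rng.standard_normal((d_hid, d_hid)) * 0.1
h = np.zeros(d_hid)
for w in words:
    h = rnn_step(h, np.concatenate([w, img]), W_inj, U_inj)
V_inj = rng.standard_normal((vocab, d_hid)) * 0.1
logits_inject = V_inj @ h  # next-word scores; image bound inside the RNN

# --- Merge: the RNN encodes words only; image joins afterwards ---
W_mrg = rng.standard_normal((d_hid, d_word)) * 0.1
U_mrg = rng.standard_normal((d_hid, d_hid)) * 0.1
h = np.zeros(d_hid)
for w in words:
    h = rnn_step(h, w, W_mrg, U_mrg)
V_mrg = rng.standard_normal((vocab, d_hid + d_img)) * 0.1
logits_merge = V_mrg @ np.concatenate([h, img])  # image bound late, outside the RNN

print(logits_inject.shape, logits_merge.shape)
```

Both variants yield a score vector over the vocabulary; the difference is only *where* the visual features are bound, which is the design axis the paper evaluates.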
URI: | https://www.um.edu.mt/library/oar/handle/123456789/22011 |
Appears in Collections: | Scholarly Works - InsLin |
Files in This Item:
File | Description | Size | Format
---|---|---|---
Where_to_put_the_image_in_an_image_caption_generator(2017).pdf | | 396.47 kB | Adobe PDF
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.