
Towards Cycle-Consistent Models for Text and Image Retrieval

Abstract: Cross-modal retrieval has recently become a research hot spot, thanks to the development of deep learnable architectures. Such architectures generally learn a joint multi-modal embedding space in which text and images can be projected and compared. Here we investigate a different approach and reformulate cross-modal retrieval as the problem of learning a translation between the textual and visual domains. In particular, we propose an end-to-end trainable model which translates text into image features and vice versa, and regularizes this mapping with a cycle-consistency criterion. Preliminary experimental evaluations show promising results with respect to standard visual-semantic models.
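The abstract describes translating between the textual and visual feature spaces and regularizing this mapping with a cycle-consistency criterion. The sketch below illustrates the general idea with two hypothetical linear mappings and an L1 cycle loss; the network architecture, feature dimensions, and exact loss form are assumptions made for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical mapping networks between the two feature spaces.
# Dimensions (1024-d text, 2048-d image) are assumptions, not the paper's.
text_to_image = nn.Linear(1024, 2048)   # maps text features into the image feature space
image_to_text = nn.Linear(2048, 1024)   # maps image features into the text feature space

def cycle_consistency_loss(text_feat, image_feat):
    """L1 cycle-consistency: translating to the other domain and back
    should approximately recover the original features."""
    recon_text = image_to_text(text_to_image(text_feat))
    recon_image = text_to_image(image_to_text(image_feat))
    return (recon_text - text_feat).abs().mean() + (recon_image - image_feat).abs().mean()

# Example usage with random features standing in for encoded captions and images.
text_feat = torch.randn(32, 1024)
image_feat = torch.randn(32, 2048)
loss = cycle_consistency_loss(text_feat, image_feat)
```

In practice such a cycle term would be combined with a retrieval objective (e.g. a ranking loss over translated features); the snippet only shows how the cycle constraint itself can be expressed.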


Citation:

Cornia, Marcella; Baraldi, Lorenzo; Rezazadegan Tavakoli, Hamed; Cucchiara, Rita. "Towards Cycle-Consistent Models for Text and Image Retrieval." In Computer Vision – ECCV 2018 Workshops, Munich, Germany, 8-14 September 2018 (published 2019). DOI: 10.1007/978-3-030-11018-5_58

Paper download: not available