Cross-Modal Retrieval via Deep and Bidirectional Representation Learning
Aug 05, 2016
Title: Cross-Modal Retrieval via Deep and Bidirectional Representation Learning
Authors: He, YH; Xiang, SM; Kang, CC; Wang, J; Pan, CH
Author Full Names: He, Yonghao; Xiang, Shiming; Kang, Cuicui; Wang, Jian; Pan, Chunhong
Source: IEEE TRANSACTIONS ON MULTIMEDIA, 18(7):1363-1377; 10.1109/TMM.2016.2558463, JUL 2016
Language: English
Abstract: Cross-modal retrieval emphasizes understanding inter-modality semantic correlations, which is often achieved by designing a similarity function. A central requirement for such a function is that similarity across modalities be computable in the first place. In this paper, a deep and bidirectional representation learning model is proposed to address image-text cross-modal retrieval. Owing to the solid progress of deep learning in computer vision and natural language processing, semantic representations can reliably be extracted from both raw image and raw text data using deep neural networks. Accordingly, the proposed model adopts two convolution-based networks to learn representations for images and texts. By passing through these networks, images and texts are mapped into a common space, in which cross-modal similarity is measured by cosine distance. A bidirectional network architecture is then designed to capture a defining property of cross-modal retrieval: bidirectional search. This architecture is characterized by involving matched and unmatched image-text pairs simultaneously during training. On this basis, a learning framework with a maximum likelihood criterion is developed, and the network parameters are optimized via backpropagation and stochastic gradient descent. Extensive experiments evaluate the proposed method on three publicly released datasets: IAPRTC-12, Flickr30k, and Flickr8k. The overall results show that the proposed architecture is effective and that the learned representations carry good semantics, achieving superior cross-modal retrieval performance.
ISSN: 1520-9210
eISSN: 1941-0077
IDS Number: DR2RW
Unique ID: WOS:000379752600012
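
The pipeline described in the abstract (two branches mapping images and texts into a common space, cosine similarity, and a likelihood-based objective over matched and unmatched pairs in both search directions) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the branch architectures, dimensions, temperature, and the softmax cross-entropy form of the maximum likelihood criterion are all assumptions, and simple fully connected layers over precomputed features stand in for the paper's convolution-based networks.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchNet(nn.Module):
    """Maps image and text features into a shared embedding space.
    Fully connected branches are stand-ins (an assumption) for the
    paper's convolution-based image and text networks."""
    def __init__(self, img_dim=4096, txt_dim=300, embed_dim=256):
        super().__init__()
        self.img_branch = nn.Sequential(
            nn.Linear(img_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim))
        self.txt_branch = nn.Sequential(
            nn.Linear(txt_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim))

    def forward(self, img_feats, txt_feats):
        # L2-normalize so dot products equal cosine similarity.
        img_emb = F.normalize(self.img_branch(img_feats), dim=1)
        txt_emb = F.normalize(self.txt_branch(txt_feats), dim=1)
        return img_emb, txt_emb

def bidirectional_nll(img_emb, txt_emb, temperature=0.1):
    """Likelihood-style objective over matched (diagonal) and unmatched
    (off-diagonal) pairs, applied in both search directions. The softmax
    cross-entropy form here is an assumption; the paper defines its own
    maximum likelihood criterion."""
    sim = img_emb @ txt_emb.t() / temperature     # pairwise cosine similarities
    targets = torch.arange(sim.size(0))           # matched pairs lie on the diagonal
    loss_i2t = F.cross_entropy(sim, targets)      # image -> text search
    loss_t2i = F.cross_entropy(sim.t(), targets)  # text -> image search
    return loss_i2t + loss_t2i

# Toy training step with stochastic gradient descent, mirroring the
# optimization setup named in the abstract.
model = TwoBranchNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
img_feats, txt_feats = torch.randn(32, 4096), torch.randn(32, 300)
optimizer.zero_grad()
img_emb, txt_emb = model(img_feats, txt_feats)
loss = bidirectional_nll(img_emb, txt_emb)
loss.backward()
optimizer.step()

Because both embeddings are L2-normalized, retrieval in either direction reduces to ranking candidates by the same similarity matrix, which is the sense in which the training objective above handles bidirectional search symmetrically.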