Cross-Modal Subspace Learning for Fine-Grained Sketch-based Image Retrieval
Mar 19, 2018

Title: Cross-Modal Subspace Learning for Fine-Grained Sketch-based Image Retrieval

 Authors: Xu, P; Yin, QY; Huang, YY; Song, YZ; Ma, ZY; Wang, L; Xiang, T; Kleijn, WB; Guo, J

 Author Full Names: Xu, Peng; Yin, Qiyue; Huang, Yongye; Song, Yi-Zhe; Ma, Zhanyu; Wang, Liang; Xiang, Tao; Kleijn, W. Bastiaan; Guo, Jun

 Source: NEUROCOMPUTING, 278: 75-86 (SI), FEB 22 2018. DOI: 10.1016/j.neucom.2017.05.099

 Language: English

 Abstract: Sketch-based image retrieval (SBIR) is challenging due to the inherent domain gap between sketch and photo. Compared with the pixel-perfect depictions of photos, sketches are highly abstract, iconic renderings of the real world. Matching sketch and photo directly using low-level visual cues is therefore insufficient, since a common low-level subspace that traverses semantically across the two modalities is non-trivial to establish. Most existing SBIR studies do not directly tackle this cross-modal problem. This naturally motivates us to explore the effectiveness of cross-modal retrieval methods in SBIR, which have been applied successfully to image-text matching. In this paper, we introduce and compare a series of state-of-the-art cross-modal subspace learning methods and benchmark them on two recently released fine-grained SBIR datasets. Through thorough examination of the experimental results, we demonstrate that subspace learning can effectively model the sketch-photo domain gap. In addition, we draw a few key insights to drive future research. (C) 2017 Elsevier B.V. All rights reserved.
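 As a rough illustration of the kind of cross-modal subspace learning the paper benchmarks, the minimal sketch below fits canonical correlation analysis (CCA), a classical subspace method, on paired sketch/photo features and then ranks a photo gallery against a query sketch inside the learned shared subspace. The random features, dimensions, and use of scikit-learn's CCA are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of cross-modal subspace learning for sketch-photo retrieval
# via CCA. All data and dimensions below are hypothetical placeholders.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_pairs, d_sketch, d_photo, d_sub = 200, 128, 128, 16

# Paired training features: row i of each matrix describes the same object,
# once as a sketch feature and once as a photo feature.
X_sketch = rng.standard_normal((n_pairs, d_sketch))
X_photo = rng.standard_normal((n_pairs, d_photo))

# Learn linear projections of both modalities into a shared subspace
# that maximizes the correlation between paired samples.
cca = CCA(n_components=d_sub)
cca.fit(X_sketch, X_photo)

# Retrieval: project a query sketch and the photo gallery into the shared
# subspace, then rank gallery photos by cosine similarity to the query.
query = rng.standard_normal((1, d_sketch))
q_sub, gallery_sub = cca.transform(query, X_photo)
q_sub /= np.linalg.norm(q_sub, axis=1, keepdims=True)
gallery_sub /= np.linalg.norm(gallery_sub, axis=1, keepdims=True)
ranking = np.argsort(-(gallery_sub @ q_sub.T).ravel())
print("top-5 photo indices:", ranking[:5])
```

 In a real fine-grained SBIR setup the random matrices would be replaced by extracted sketch and photo descriptors, and CCA is only one of the subspace methods compared in the paper.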

 ISSN: 0925-2312

 eISSN: 1872-8286

 IDS Number: FU6LV

 Unique ID: WOS:000423965000009
