

Deep Unsupervised Learning with Consistent Inference of Latent Representations
Posted: Mar 19, 2018

Title: Deep Unsupervised Learning with Consistent Inference of Latent Representations

 Authors: Chang, JL; Wang, LF; Meng, GF; Xiang, SM; Pan, CH

 Author Full Names: Chang, Jianlong; Wang, Lingfeng; Meng, Gaofeng; Xiang, Shiming; Pan, Chunhong

 Source: PATTERN RECOGNITION, 77: 438-453; DOI: 10.1016/j.patcog.2017.10.022; MAY 2018

 Language: English

 Abstract: Utilizing unlabeled data to train deep neural networks (DNNs) is a crucial but challenging task. In this paper, we propose an end-to-end approach to tackle this problem with consistent inference of latent representations. Specifically, each unlabeled data point is treated as a seed to generate a set of latent labeled data points by adding various random disturbances or transformations. Under the expectation maximization framework, DNNs can be trained in an unsupervised way by minimizing the distances between the data points with the same latent representations. Furthermore, several variants of our approach can be derived by applying regularized and sparse constraints during optimization. Theoretically, the convergence of the proposed method and its variants is fully analyzed. Experimental results show that the proposed approach can significantly improve the performance on various tasks, including image classification and clustering. These results also indicate that our method guides DNNs to learn more invariant feature representations than traditional unsupervised methods. (C) 2017 Elsevier Ltd. All rights reserved.
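
 To make the idea in the abstract concrete, below is a minimal, hypothetical sketch (in PyTorch) of the core consistency step: each unlabeled point serves as a seed, two randomly disturbed copies are generated, and the distance between their latent representations is minimized. The Encoder architecture, the additive-noise disturb function, and the plain squared-distance loss are illustrative assumptions made here for the sketch; they do not reproduce the paper's expectation-maximization formulation or its regularized and sparse variants.

 # Hypothetical sketch, not the paper's exact method: pull together the
 # latent codes of two randomly disturbed copies of each unlabeled input.
 import torch
 import torch.nn as nn

 class Encoder(nn.Module):
     """Small convolutional encoder producing a latent vector (illustrative)."""
     def __init__(self, latent_dim=64):
         super().__init__()
         self.net = nn.Sequential(
             nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
             nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
             nn.Flatten(),
             nn.Linear(32 * 7 * 7, latent_dim),
         )

     def forward(self, x):
         return self.net(x)

 def disturb(x, noise_std=0.1):
     """One possible random disturbance: additive Gaussian noise."""
     return x + noise_std * torch.randn_like(x)

 encoder = Encoder()
 optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

 # One training step on a batch of unlabeled 28x28 images (dummy data here).
 x = torch.rand(8, 1, 28, 28)
 z1 = encoder(disturb(x))                   # latent code of first disturbed copy
 z2 = encoder(disturb(x))                   # latent code of second disturbed copy
 loss = (z1 - z2).pow(2).sum(dim=1).mean()  # distance between same-seed latents
 optimizer.zero_grad()
 loss.backward()
 optimizer.step()

 In line with the abstract, the disturbance could equally be any label-preserving transformation of the seed (e.g. cropping or flipping for images) rather than additive noise; the consistency objective is unchanged.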

 ISSN: 0031-3203

 eISSN: 1873-5142

 IDS Number: FX6UM

 Unique ID: WOS:000426222800033
