
Conditional High-Order Boltzmann Machines for Supervised Relation Learning
Oct 21, 2017

 Authors: Huang, Y; Wang, W; Wang, L; Tan, TN

 Author Full Names: Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu

 Source: IEEE Transactions on Image Processing, 26(9): 4297-4310, September 2017. DOI: 10.1109/TIP.2017.2698918

 Language: English

 Abstract: Relation learning is a fundamental problem in many vision tasks. Recently, high-order Boltzmann machines and their variants have shown great potential for learning various types of data relations across a range of tasks. However, most of these models are learned in an unsupervised way, i.e., without relation class labels, and are therefore not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal of performing supervised relation learning, we introduce relation class labels into the conventional high-order multiplicative interactions over pairwise input samples and propose a conditional high-order Boltzmann machine (CHBM), which learns to classify data relations as a binary classification problem. To handle more complex data relations, we develop two improved variants of the CHBM: 1) the latent CHBM, which jointly performs relation feature learning and classification by using a set of latent variables to block the pathway from the pairwise input samples to the output relation labels, and 2) the gated CHBM, which untangles factors of variation in data relations by exploiting a set of latent variables to multiplicatively gate the classification of the CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize the high-order parameter tensors into multiple matrices. We then develop efficient supervised learning algorithms, first pretraining the models with the joint likelihood to provide good parameter initialization, and then finetuning them with the conditional likelihood to enhance their discriminative ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that, by exploiting supervised relation labels, our models greatly improve performance.
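 The factorization mentioned in the abstract, replacing a full high-order parameter tensor with per-mode factor matrices, can be sketched in a few lines. The snippet below is a minimal NumPy illustration rather than the paper's exact model: the matrix names (Wx, Wy, Wl), the dimensions, and the softmax relation classifier are assumptions introduced only to show how factored three-way multiplicative interactions between a pair of input samples and the relation labels keep the parameter count manageable.

import numpy as np

# Minimal sketch of factored three-way multiplicative interactions, the
# device used to cut the parameter count of high-order models.
# Shapes and names here are illustrative assumptions, not the paper's
# exact parameterization.
rng = np.random.default_rng(0)
dim_x, dim_y, num_labels, num_factors = 64, 64, 2, 32

# Instead of a full dim_x x dim_y x num_labels tensor, keep one factor
# matrix per mode (first input, second input, relation label).
Wx = rng.normal(scale=0.01, size=(dim_x, num_factors))
Wy = rng.normal(scale=0.01, size=(dim_y, num_factors))
Wl = rng.normal(scale=0.01, size=(num_labels, num_factors))
bias = np.zeros(num_labels)

def relation_scores(x, y):
    """Score each relation label via factor-wise products of the pair."""
    fx = x @ Wx                     # project first sample onto the factors
    fy = y @ Wy                     # project second sample onto the factors
    return (fx * fy) @ Wl.T + bias  # combine per-factor products per label

def relation_probs(x, y):
    """Softmax over relation labels, e.g. same pair vs. different pair."""
    s = relation_scores(x, y)
    s = s - s.max()                 # stabilize the exponentials
    e = np.exp(s)
    return e / e.sum()

# Example: classify the relation between one random pair of samples.
x, y = rng.normal(size=dim_x), rng.normal(size=dim_y)
print(relation_probs(x, y))

 With this factorization the pairwise scoring needs (dim_x + dim_y + num_labels) * num_factors weights instead of the dim_x * dim_y * num_labels entries of an unfactorized tensor, which is the kind of saving the abstract refers to.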

 ISSN: 1057-7149

 eISSN: 1941-0042

 IDS Number: FA4EF

 Unique ID: WOS:000405395900004
