Papers
Invariant Feature Extraction for Gait Recognition Using Only One Uniform Model
Jul 24, 2017

Title: Invariant Feature Extraction for Gait Recognition Using Only One Uniform Model

 Authors: Yu, SQ; Chen, HF; Wang, Q; Shen, LL; Huang, YZ  

Author Full Names: Yu, Shiqi; Chen, Haifeng; Wang, Qing; Shen, Linlin; Huang, Yongzhen  

Source: NEUROCOMPUTING, 239: 81-93; DOI: 10.1016/j.neucom.2017.02.006; MAY 24 2017  

Language: English  

Abstract: Gait recognition has proved useful for human identification at a distance, but variations such as view, clothing, and carrying condition still make it challenging in real applications. These variations make it hard to extract invariant features that distinguish different subjects. For view variation, a view transformation model can be employed to convert a gait feature from one view to another. However, most existing models must first estimate the view angle and work for only one view pair; they cannot convert multi-view data to one specific view efficiently, and other variations require additional specific models. We employ a deep model based on auto-encoders for invariant gait feature extraction. The model synthesizes gait features in a progressive way through stacked multi-layer auto-encoders. Its unique advantage is that it extracts invariant gait features using only one model, and the extracted features are robust to view, clothing, and carrying-condition variations. The proposed method is evaluated on two large gait datasets, CASIA Gait Dataset B and the SZU RGB-D Gait Dataset. The experimental results show that the proposed method achieves state-of-the-art performance with only one uniform model. (C) 2017 Elsevier B.V. All rights reserved.  
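The abstract's core idea of building features layer by layer with stacked auto-encoders can be illustrated with a minimal toy sketch. This is not the authors' actual architecture or training procedure (the paper should be consulted for those); it only shows the generic greedy layer-wise scheme the abstract refers to, with tied encoder/decoder weights, and all function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder_layer(X, hidden_dim, lr=0.1, epochs=200):
    """Train one tied-weight auto-encoder layer by plain gradient descent
    on the squared reconstruction error; returns the encoder weights."""
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden_dim))
    for _ in range(epochs):
        H = np.tanh(X @ W)           # encode
        X_hat = H @ W.T              # decode with tied weights
        err = X_hat - X              # reconstruction error
        dH = (err @ W) * (1 - H**2)  # backprop through tanh
        grad = X.T @ dH + err.T @ H  # gradient w.r.t. the shared W
        W -= lr * grad / n
    return W

def stacked_encode(X, layer_dims):
    """Greedy layer-wise stacking: each new auto-encoder layer is trained
    on the codes produced by the previous one, progressively refining
    the feature representation."""
    codes, weights = X, []
    for h in layer_dims:
        W = train_autoencoder_layer(codes, h)
        weights.append(W)
        codes = np.tanh(codes @ W)
    return codes, weights

# Toy data standing in for flattened gait energy images.
X = rng.normal(size=(64, 16))
features, weights = stacked_encode(X, [12, 8])
```

After stacking, `features` holds the top-layer codes for all samples; in the paper's setting, such codes would be the invariant representation fed to a recognizer.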

ISSN: 0925-2312  

eISSN: 1872-8286  

IDS Number: EP9JG  

Unique ID: WOS:000397689300008
