

Oct 21, 2017

Title: A Multimodal Framework Based on Integration of Cortical and Muscular Activities for Decoding Human Intentions About Lower Limb Motions

 Authors: Cui, CK; Bian, GB; Hou, ZG; Zhao, J; Zhou, H

 Author Full Names: Cui, Chengkun; Bian, Gui-Bin; Hou, Zeng-Guang; Zhao, Jun; Zhou, Hao

 Source: IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS, 11 (4): 889-899, AUG 2017. DOI: 10.1109/TBCAS.2017.2699189

 Language: English

 Abstract: In this study, a multimodal fusion framework based on three different modal biosignals is developed to recognize human intentions related to lower limb multi-joint motions that commonly appear in daily life. Electroencephalogram (EEG), electromyogram (EMG), and mechanomyogram (MMG) signals were recorded simultaneously from twelve subjects while they performed nine lower limb multi-joint motions. These multimodal data are used as inputs to the fusion framework for identifying different motion intentions. Twelve fusion techniques are evaluated in this framework, and a large number of comparative experiments are carried out. The results show that a support vector machine (SVM)-based three-modal fusion scheme can achieve average accuracies of 98.61%, 97.78%, and 96.85%, respectively, under three different data division forms. Furthermore, the relevant statistical tests reveal that this fusion scheme yields a significant accuracy improvement over two-modal fusion and single-modality cases. These promising results indicate the potential of the multimodal fusion framework to facilitate the future development of human-robot interaction for lower limb rehabilitation.
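 The abstract does not specify how the twelve fusion techniques are implemented, so the following is only a minimal sketch of one plausible reading of the best-performing scheme: feature-level (concatenation) fusion of EEG, EMG, and MMG features fed to an SVM classifier. The feature dimensions, the random stand-in data, and the SVM parameters below are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: feature-level (concatenation) fusion of three modalities + SVM.
# All feature dimensions, the random stand-in data, and the SVM settings are
# illustrative assumptions; the paper's preprocessing, feature extraction, and
# fusion schemes are not described in this abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_classes = 540, 9                      # 9 lower limb multi-joint motions

# Stand-in feature matrices for the three modalities (one row per trial).
eeg_feats = rng.standard_normal((n_trials, 32))   # hypothetical EEG features
emg_feats = rng.standard_normal((n_trials, 16))   # hypothetical EMG features
mmg_feats = rng.standard_normal((n_trials, 16))   # hypothetical MMG features
labels = rng.integers(0, n_classes, size=n_trials)

# Three-modal fusion by simple feature concatenation.
fused = np.hstack([eeg_feats, emg_feats, mmg_feats])

# SVM with an RBF kernel; kernel choice and hyperparameters are placeholders.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"Mean cross-validation accuracy: {scores.mean():.4f}")
```

 With real, aligned trial-wise features in place of the random arrays, the same structure could also be restricted to two modalities or a single modality to reproduce the kind of comparison the abstract reports.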

 ISSN: 1932-4545

 eISSN: 1940-9990

 IDS Number: FB8RQ

 Unique ID: WOS:000406406900015
