
Multi-Perspective Cost-Sensitive Context-Aware Multi-Instance Sparse Coding and Its Application to Sensitive Video Recognition

Jan 15, 2016

Title: Multi-Perspective Cost-Sensitive Context-Aware Multi-Instance Sparse Coding and Its Application to Sensitive Video Recognition

 Authors: Hu, WM; Ding, XM; Li, B; Wang, JC; Gao, Y; Wang, FS; Maybank, S

 Author Full Names: Hu, Weiming; Ding, Xinmiao; Li, Bing; Wang, Jianchao; Gao, Yan; Wang, Fangshi; Maybank, Stephen

 Source: IEEE TRANSACTIONS ON MULTIMEDIA, 18 (1): 76-89; DOI: 10.1109/TMM.2015.2496372; JAN 2016

 Language: English

 Abstract:

With the growth of video-sharing websites, P2P networks, micro-blogs, mobile WAP sites, and similar platforms, sensitive videos have become easier to access. Effective sensitive video recognition is therefore necessary for web content security. Among sensitive web videos, this paper focuses on violent and horror videos. Based on color emotion and color harmony theories, we extract visual emotional features from videos. A video is viewed as a bag, and each shot in the video is represented by a key frame, which is treated as an instance in the bag. We then combine multi-instance learning (MIL) with sparse coding to recognize violent and horror videos. The resulting MIL-based model can be updated online to adapt to changing web environments. We propose a cost-sensitive context-aware multi-instance sparse coding (MI-SC) method, in which the contextual structure of the key frames is modeled using a graph, and audio and visual features are fused by extending classic sparse coding into cost-sensitive sparse coding. We then propose a multi-perspective multi-instance joint sparse coding (MI-J-SC) method that handles each bag of instances from an independent perspective, a contextual perspective, and a holistic perspective. Experiments demonstrate that features with an emotional meaning are effective for violent and horror video recognition, and that our cost-sensitive context-aware MI-SC and multi-perspective MI-J-SC methods outperform traditional MIL methods as well as traditional SVM- and kNN-based methods.
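 To make the bag/instance formulation above concrete, the following is a minimal Python sketch of a generic multi-instance sparse-coding pipeline, not the paper's cost-sensitive context-aware MI-SC or multi-perspective MI-J-SC formulation. It assumes, purely for illustration, random placeholder features, a shared dictionary learned with scikit-learn's DictionaryLearning, max-pooling of the per-instance sparse codes, and a linear SVM as the bag-level classifier.

# Minimal sketch (not the authors' method): each video is a bag of key-frame
# feature vectors; instances are sparse-coded against a learned dictionary,
# the codes are max-pooled into one bag-level vector, and a linear SVM is
# trained on the pooled vectors. All sizes and data below are toy assumptions.
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy data: 20 "videos" (bags), each with 5-15 key frames described by
# 32-dimensional placeholder features standing in for visual emotional features.
bags = [rng.normal(size=(rng.integers(5, 16), 32)) for _ in range(20)]
labels = np.array([0] * 10 + [1] * 10)  # 1 = sensitive, 0 = normal (toy labels)

# 1. Learn a shared dictionary from all instances pooled together.
all_instances = np.vstack(bags)
dico = DictionaryLearning(n_components=64, alpha=1.0, max_iter=200,
                          random_state=0).fit(all_instances)

# 2. Sparse-code every instance in a bag, then max-pool the absolute codes
#    so each bag becomes a single fixed-length vector.
coder = SparseCoder(dictionary=dico.components_,
                    transform_algorithm="lasso_lars", transform_alpha=1.0)
bag_features = np.stack([np.abs(coder.transform(bag)).max(axis=0)
                         for bag in bags])

# 3. Train and evaluate a bag-level classifier on the pooled sparse codes.
clf = LinearSVC(C=1.0).fit(bag_features, labels)
print("training accuracy:", clf.score(bag_features, labels))

 The max-pooling step is one common way to turn instance-level codes into a bag-level representation; the paper's graph-based contextual modeling and cost-sensitive fusion of audio and visual features are not reflected in this sketch.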

 ISSN: 1520-9210

 eISSN: 1941-0077

 IDS Number: CZ5JW

 Unique ID: WOS:000367139700008