Video Super-Resolution based on Spatial-Temporal Recurrent Residual Networks
Jul 10, 2018

Title: Video Super-Resolution based on Spatial-Temporal Recurrent Residual Networks

Authors: Yang, WH; Feng, JS; Xie, GS; Liu, JY; Guo, ZM; Yan, SC  

Author Full Names: Yang, Wenhan; Feng, Jiashi; Xie, Guosen; Liu, Jiaying; Guo, Zongming; Yan, Shuicheng  

Source: COMPUTER VISION AND IMAGE UNDERSTANDING, 168: 79-92, SI, MAR 2018. DOI: 10.1016/j.cviu.2017.09.002

Language: English  

Abstract: In this paper, we propose a new video Super-Resolution (SR) method that jointly models intra-frame redundancy and inter-frame motion context in a unified deep network. Unlike conventional methods, the proposed Spatial-Temporal Recurrent Residual Network (STR-ResNet) investigates both spatial and temporal residues: the spatial residue is the difference between a high-resolution (HR) frame and its corresponding low-resolution (LR) frame, and the temporal residue is the difference between adjacent HR frames. This spatial-temporal residual learning model is then used within a recurrent convolutional network to connect the intra-frame and inter-frame redundancies of video sequences and to predict HR temporal residues in the penultimate layer, which guide the estimation of the spatial residue for video SR. Extensive experiments demonstrate that the proposed STR-ResNet efficiently reconstructs videos with diverse content and complex motion, outperforming existing video SR approaches and setting new state-of-the-art performance on benchmark datasets.
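
For illustration only (not code from the paper): the two residues STR-ResNet learns are plain frame differences. A minimal NumPy sketch, assuming grayscale frames, a 2x scale factor, and nearest-neighbor upsampling standing in for whatever interpolation the authors actually use:

    import numpy as np

    def upsample_nearest(lr, scale):
        # Nearest-neighbor upsampling: replicate each LR pixel into a
        # scale x scale block so the frame matches the HR resolution.
        return np.kron(lr, np.ones((scale, scale), dtype=lr.dtype))

    def spatial_residue(hr, lr, scale):
        # Spatial residue: HR frame minus its upsampled LR counterpart
        # (the intra-frame detail the network must recover).
        return hr - upsample_nearest(lr, scale)

    def temporal_residue(hr_prev, hr_next):
        # Temporal residue: difference between adjacent HR frames
        # (the inter-frame motion context).
        return hr_next - hr_prev

    # Toy data: two adjacent 8x8 HR frames; the LR frame is a naive 2x
    # downsampling of the first (hypothetical, for shape checking only).
    rng = np.random.default_rng(0)
    hr_t = rng.random((8, 8))
    hr_t1 = rng.random((8, 8))
    lr_t = hr_t[::2, ::2]

    print(spatial_residue(hr_t, lr_t, scale=2).shape)  # (8, 8)
    print(temporal_residue(hr_t, hr_t1).shape)         # (8, 8)

At test time the network predicts these residues rather than computing them from ground truth: the predicted HR temporal residue guides the estimation of the spatial residue, which, by the definition above, is added back to the upsampled LR frame to reconstruct the HR output.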

ISSN: 1077-3142  

eISSN: 1090-235X  

IDS Number: GB6NK  

Unique ID: WOS:000429185700007
