Title: Encoder-decoder recurrent network model for interactive character animation generation
Full Names: Wang, Yumeng; Che, Wujun; Xu, Bo
Language: English
Abstract: In this paper, we propose a generative recurrent model for human-character interaction. The model is an encoder-recurrent-decoder network: a recurrent core of stacked long short-term memory (LSTM) layers, preceded by an encoder network and followed by a decoder network. With the proposed model, the virtual character's animation is generated on the fly while it interacts with the human player: the character's upcoming motion is generated automatically from the motion history of both the character itself and its human opponent. We evaluated the model on both public motion-capture databases and our own recorded motion data. Experimental results demonstrate that the LSTM layers enable the character to learn from a long history of human dynamics when animating itself, and that the encoder and decoder networks significantly improve the stability of the generated animation. The method can thus automatically animate a virtual character responding to a human player.
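The encoder-recurrent-decoder structure described in the abstract can be sketched as follows. This is a minimal, dependency-free illustration, not the authors' implementation: all dimensions, weight initializations, and the `ERD`/`LSTMCell` names are assumptions made for the example, and the sketch shows only the forward pass that maps the motion history of both characters to the character's next pose, one frame at a time.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, x):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def vadd(a, b):
    return [x + y for x, y in zip(a, b)]

class LSTMCell:
    """A single LSTM layer: input, forget, output gates and cell candidate."""
    def __init__(self, in_dim, hid_dim, rng):
        def mat(r, c):
            return [[rng.uniform(-0.1, 0.1) for _ in range(c)] for _ in range(r)]
        self.hid = hid_dim
        self.Wx = mat(4 * hid_dim, in_dim)   # input-to-gate weights
        self.Wh = mat(4 * hid_dim, hid_dim)  # recurrent weights
        self.b = [0.0] * (4 * hid_dim)

    def step(self, x, h, c):
        z = vadd(vadd(matvec(self.Wx, x), matvec(self.Wh, h)), self.b)
        H = self.hid
        i = [sigmoid(v) for v in z[0:H]]          # input gate
        f = [sigmoid(v) for v in z[H:2 * H]]      # forget gate
        o = [sigmoid(v) for v in z[2 * H:3 * H]]  # output gate
        g = [math.tanh(v) for v in z[3 * H:4 * H]]  # cell candidate
        c_new = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c, i, g)]
        h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
        return h_new, c_new

class ERD:
    """Encoder -> stacked LSTM layers -> decoder, applied frame by frame."""
    def __init__(self, pose_dim, hid_dim, n_layers, seed=0):
        rng = random.Random(seed)
        def mat(r, c):
            return [[rng.uniform(-0.1, 0.1) for _ in range(c)] for _ in range(r)]
        # Encoder sees the poses of both the character and the human player.
        self.enc = mat(hid_dim, 2 * pose_dim)
        # Decoder maps the top recurrent state to the next character pose.
        self.dec = mat(pose_dim, hid_dim)
        self.cells = [LSTMCell(hid_dim, hid_dim, rng) for _ in range(n_layers)]
        # Persistent (h, c) state per layer carries the motion history.
        self.state = [([0.0] * hid_dim, [0.0] * hid_dim) for _ in range(n_layers)]

    def step(self, char_pose, human_pose):
        x = [math.tanh(v) for v in matvec(self.enc, char_pose + human_pose)]
        for k, cell in enumerate(self.cells):
            h, c = cell.step(x, *self.state[k])
            self.state[k] = (h, c)
            x = h
        return matvec(self.dec, x)  # predicted next pose of the character
```

In use, the model would be called once per animation frame with the latest poses of both participants, so the character's next pose depends on the accumulated LSTM state rather than on the current frame alone.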