Adversarial Image Generation by Combining Content and Style
Nov 23, 2021

Authors

Liu, Songyan; Zhao, Chaoyang; Gao, Yunze; Wang, Jinqiao; Tang, Ming

Abstract

Images can be considered as the combination of two parts: the content and the style. The authors' approach leverages this property by extracting a distinctive style from reference images and combining it with new content to generate new images. With a well-defined style feature extraction module, they propose a novel framework for generating images that share the same content but vary in style. To train the style-specific image generation model efficiently, a double-cycle training strategy is proposed: two natural content-style pairs are input simultaneously, their style features are extracted, and the styles are exchanged twice to reconstruct the input natural images. In addition, a triplet margin loss is applied to the style features extracted from the images before and after the style exchange, and an adversarial discriminator forces the style-exchanged images to look real. Experiments on license plate images, Chinese characters, and shoes/handbags image generation yield photo-realistic results and remarkably improve performance on the corresponding supervised recognition tasks.
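
The double-cycle exchange and the triplet margin loss described above can be illustrated with a minimal PyTorch-style sketch. The module names (StyleEncoder, Generator), the additive style fusion, and the loss weighting are illustrative assumptions, not the authors' actual architecture; the adversarial discriminator mentioned in the abstract is only noted in a comment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleEncoder(nn.Module):
    """Hypothetical style feature extraction module (not the paper's design)."""
    def __init__(self, style_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, style_dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

class Generator(nn.Module):
    """Combines a content image with a style vector (naive additive fusion)."""
    def __init__(self, style_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 64, 3, 1, 1), nn.ReLU())
        self.style_proj = nn.Linear(style_dim, 64)
        self.dec = nn.Sequential(nn.Conv2d(64, 3, 3, 1, 1), nn.Tanh())

    def forward(self, content_img, style_vec):
        h = self.enc(content_img)
        s = self.style_proj(style_vec)[:, :, None, None]
        return self.dec(h + s)

def double_cycle_step(E, G, x_a, x_b, margin=1.0):
    """One double-cycle iteration on a pair of natural images x_a, x_b."""
    s_a, s_b = E(x_a), E(x_b)

    # First exchange: render each image's content with the other's style.
    x_ab, x_ba = G(x_a, s_b), G(x_b, s_a)

    # Second exchange: swap the styles back to reconstruct the inputs.
    s_ab, s_ba = E(x_ab), E(x_ba)
    x_a_rec, x_b_rec = G(x_ab, s_ba), G(x_ba, s_ab)
    loss_rec = F.l1_loss(x_a_rec, x_a) + F.l1_loss(x_b_rec, x_b)

    # Triplet margin loss: the style extracted after the exchange should stay
    # close to the style that was injected and far from the other style.
    loss_tri = (F.triplet_margin_loss(s_b, s_ab, s_a, margin=margin) +
                F.triplet_margin_loss(s_a, s_ba, s_b, margin=margin))

    # An adversarial discriminator would additionally score x_ab and x_ba as
    # real/fake; that GAN loss is omitted from this sketch.
    return loss_rec + loss_tri, (x_ab, x_ba)
```

A usage example would pair this step with an optimizer over the encoder and generator parameters, alternating with discriminator updates, as in standard GAN training.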

Publisher

IET Image Processing

Research Area

Engineering, Electrical & Electronic; Computer Science, Artificial Intelligence; Imaging Science & Photographic Technology

DOI: 10.1049/iet-ipr.2019.0103