Clova AI Research, the AI research arm of South Korean company Naver Corp., has announced StarGAN v2, together with an official PyTorch implementation of its recent and popular model.
The work of Yunjey Choi (Clova AI Research), Youngjung Uh (Clova AI Research), Jaejun Yoo (EPFL) and Jung-Woo Ha (Clova AI Research) on "StarGAN v2: Diverse Image Synthesis for Multiple Domains" has shown performance above all other existing models. The StarGAN v2 code and pretrained models, including a new dataset of high-quality animal faces (AFHQ), are hosted on the Clova AI GitHub.
StarGAN v2 overcomes issues that hampered existing methods. According to the team, "A good image-to-image translation model should learn a mapping between different visual domains while satisfying certain properties." StarGAN v2 is a single framework that tackles all of these properties while delivering significantly improved results over the baselines. The team validated the superiority of their work in terms of visual quality, diversity and scalability through experiments on CelebA-HQ and the new animal face dataset (AFHQ).
StarGAN v2 satisfies the following properties and thus stands above other work in the image-to-image translation and synthesis domain:
Diversity of generated images
Scalability over multiple domains
The uniqueness of this project, as explained by journalist Yuan Yuan (editor: Michael Sarazen), can be stated as:
The style code is separately generated per domain by the multi-head mapping network and style encoder.
The style space is produced by learned transformations and is inspired by NVIDIA’s StyleGAN.
The modules benefit from fully exploiting training data from multiple domains.
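To illustrate the first point above, here is a minimal, hypothetical sketch of a multi-head mapping network: a shared trunk transforms a latent code z into a hidden representation, and a separate output head per domain turns that representation into a domain-specific style code. All dimensions, layer counts and names here are illustrative assumptions, not the paper's actual architecture or values, and plain Python stands in for the team's PyTorch implementation.

```python
import random

random.seed(0)

LATENT_DIM = 16   # size of the latent code z (assumption)
HIDDEN_DIM = 32   # size of the shared hidden layer (assumption)
STYLE_DIM = 8     # size of each style code (assumption)
NUM_DOMAINS = 3   # e.g. cat / dog / wildlife in AFHQ

def init_matrix(rows, cols):
    """Random weight matrix with small entries (untrained, for illustration)."""
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)]
            for _ in range(rows)]

def linear(w, x):
    """Matrix-vector product."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def relu(x):
    return [max(0.0, v) for v in x]

# One shared trunk, plus one output head per domain (the "multi-head" part).
trunk = init_matrix(HIDDEN_DIM, LATENT_DIM)
heads = [init_matrix(STYLE_DIM, HIDDEN_DIM) for _ in range(NUM_DOMAINS)]

def mapping_network(z, domain):
    """Map a latent code to the style code of the given target domain."""
    h = relu(linear(trunk, z))       # shared layers
    return linear(heads[domain], h)  # domain-specific head

# The same latent code yields a different style code for each domain.
z = [random.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]
styles = [mapping_network(z, d) for d in range(NUM_DOMAINS)]
print(len(styles), len(styles[0]))
```

Separating the heads is what lets a single network produce per-domain style codes, which the generator can then consume to synthesize diverse images in each target domain.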
The team has also released a teaser video of their work.