MultiStyleGAN: Multiple One-shot Face Stylizations using a Single GAN
Viraj Shah
Svetlana Lazebnik
[Paper]
[GitHub]
Our MultiStyleGAN method can stylize any input face image into multiple reference styles simultaneously by fine-tuning a single pre-trained generator, requiring only one example image of each style. The results above show stylizations of input images (left column) produced by a single generator fine-tuned on 12 reference styles (top row).

Abstract

Image stylization aims to apply a reference style to arbitrary input images. A common scenario is one-shot stylization, where only one example is available for each reference style. A successful recent approach for one-shot stylization is JoJoGAN, which fine-tunes a pre-trained StyleGAN2 generator on a single style reference image. However, it cannot generate multiple stylizations without fine-tuning a new model for each style separately. In this work, we present MultiStyleGAN, a method capable of producing multiple different stylizations at once by fine-tuning a single generator. The key component of our method is a learnable Style Transformation module that takes latent codes as input and learns linear mappings to different regions of the latent space to produce distinct codes for each style, resulting in a multistyle space. Our model inherently mitigates overfitting since it is trained on multiple styles, thereby improving the quality of stylizations. Our method can learn upwards of 12 image stylizations at once, bringing up to an 8× improvement in training time. We support our results through user studies that indicate meaningful improvements over existing methods.
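To make the Style Transformation idea concrete, here is a minimal NumPy sketch of the module described above: one learnable linear mapping per reference style, applied to a latent code to produce a distinct style-specific code. All names, shapes, and the near-identity initialization are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class StyleTransformation:
    """Hypothetical sketch: one linear map (weight + bias) per style,
    turning a shared latent code into style-specific codes."""

    def __init__(self, num_styles, latent_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Assumption: initialize each map near the identity so the
        # fine-tuned generator starts close to its pre-trained behavior.
        self.weights = np.stack([
            np.eye(latent_dim)
            + 0.01 * rng.standard_normal((latent_dim, latent_dim))
            for _ in range(num_styles)
        ])
        self.biases = np.zeros((num_styles, latent_dim))

    def __call__(self, w, style_idx):
        # Linear mapping of latent code w into the latent-space
        # region associated with style `style_idx`.
        return w @ self.weights[style_idx] + self.biases[style_idx]

# One input latent code mapped to three distinct style codes;
# each output would be fed to the shared generator.
transform = StyleTransformation(num_styles=3, latent_dim=512)
w = np.random.default_rng(1).standard_normal(512)
style_codes = [transform(w, k) for k in range(3)]
```

In the full method these weights and biases would be trained jointly with the generator fine-tuning, so that each style occupies its own region of the resulting multistyle space.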



[Slides]

Code

Code will be made public soon!

[GitHub]


Paper and Supplementary Material

V. Shah, S. Lazebnik
MultiStyleGAN: Multiple One-shot Face Stylizations using a Single GAN
(arXiv pre-print)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.