StylePeople
A Generative Model of Fullbody Human Avatars
Artur Grigorev*
Karim Iskakov*
Anastasia Ianina
Renat Bashirov
Ilya Zakharkin
Alexander Vakhitov
Victor Lempitsky
Figure 1: Style people, i.e. random samples from our generative model of human avatars. Each avatar is shown from two different viewpoints. The samples are diverse in clothing and demographics, and include loose clothing and hair.
We propose a new type of full-body human avatar that combines a parametric mesh-based body model with a neural texture. We show that, with the help of neural textures, such avatars can successfully model clothing and hair, which usually pose a problem for mesh-based approaches. We also show how these avatars can be created from multiple frames of a video using backpropagation. We then propose a generative model for such avatars that can be trained from datasets of images and videos of people. The generative model allows us to sample random avatars as well as to create dressed avatars of people from one or a few images.
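For concreteness, the following is a minimal PyTorch sketch of the fitting-by-backpropagation idea: a learnable neural texture and a small translation network are optimized jointly against video frames. The module name (NeuralAvatar), channel counts, and the random UV maps are illustrative assumptions, not the paper's implementation; a real pipeline would rasterize the posed SMPL-X mesh to obtain per-pixel UV coordinates.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralAvatar(nn.Module):
    """Sketch of an avatar = learnable neural texture + translation network."""

    def __init__(self, tex_channels=16, tex_size=256):
        super().__init__()
        # Learnable multi-channel neural texture, optimized by backpropagation.
        self.texture = nn.Parameter(torch.randn(1, tex_channels, tex_size, tex_size) * 0.01)
        # Minimal convolutional net standing in for the neural renderer.
        self.renderer = nn.Sequential(
            nn.Conv2d(tex_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, uv_grid):
        # uv_grid: (B, H, W, 2) per-pixel UV coordinates in [-1, 1], produced
        # by rasterizing the posed body mesh (rasterization not shown here).
        B = uv_grid.shape[0]
        sampled = F.grid_sample(self.texture.expand(B, -1, -1, -1),
                                uv_grid, align_corners=False)
        return self.renderer(sampled)

# Fitting loop: a photometric loss is backpropagated through the renderer
# into both the network weights and the texture itself.
avatar = NeuralAvatar()
opt = torch.optim.Adam(avatar.parameters(), lr=1e-3)
for _ in range(100):                           # iterate over video frames in practice
    uv = torch.rand(1, 128, 128, 2) * 2 - 1    # placeholder rasterized UVs
    target = torch.rand(1, 3, 128, 128)        # placeholder ground-truth frame
    loss = F.l1_loss(avatar(uv), target)
    opt.zero_grad(); loss.backward(); opt.step()
```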
Figure 2: Our generative architecture is based on the combination of StyleGANv2 and neural dressing. The StyleGAN part is used to generate neural textures, which are then superimposed onto SMPL-X meshes and rendered with a neural renderer. During adversarial learning, the discriminator considers a pair of images of the same person.
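As a rough illustration of this adversarial setup, here is a hedged PyTorch sketch. TextureGenerator, PairDiscriminator, render, and to_rgb are hypothetical stand-ins (the actual system uses a StyleGANv2 generator and a full neural renderer, and SMPL-X rasterization is reduced here to placeholder UV maps). The element it illustrates is the pair discriminator, which scores two renders of the same person jointly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextureGenerator(nn.Module):
    """Stand-in for the StyleGANv2 generator: latent code -> neural texture."""

    def __init__(self, z_dim=512, tex_channels=16, tex_size=64):
        super().__init__()
        self.fc = nn.Linear(z_dim, tex_channels * tex_size * tex_size)
        self.shape = (tex_channels, tex_size, tex_size)

    def forward(self, z):
        return self.fc(z).view(z.shape[0], *self.shape)

class PairDiscriminator(nn.Module):
    """Scores two renders of the same identity, stacked along channels."""

    def __init__(self, in_ch=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, img_a, img_b):
        return self.net(torch.cat([img_a, img_b], dim=1))

def render(texture, uv_grid, to_rgb):
    # Sample the neural texture at rasterized UVs, then translate to RGB.
    feats = F.grid_sample(texture, uv_grid, align_corners=False)
    return to_rgb(feats)

# One generator step, with placeholder UV maps for two poses of one person.
gen, disc = TextureGenerator(), PairDiscriminator()
to_rgb = nn.Conv2d(16, 3, 1)                 # stand-in for the neural renderer
z = torch.randn(4, 512)
texture = gen(z)
uv_a = torch.rand(4, 64, 64, 2) * 2 - 1      # pose/view A (placeholder)
uv_b = torch.rand(4, 64, 64, 2) * 2 - 1      # pose/view B (placeholder)
score = disc(render(texture, uv_a, to_rgb), render(texture, uv_b, to_rgb))
g_loss = F.softplus(-score).mean()           # non-saturating GAN loss
```

Judging the two renders together, rather than independently, pressures the generated texture to stay consistent across poses and viewpoints of the same identity.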