Animefy yourself!

🙃 I am not sure whether the model needs to be trained for a longer time; I will update this when I have time ...

Example: iterative latent-code search projecting a real selfie to its anime counterpart (see example/example.png).

About

A "selfie2anime" project based on StyleGAN2: generate your own anime face from a real-world selfie. The whole project is built on StyleGAN2 [official code] [paper] [video] and the layer swapping technique proposed by Justin Pinkney. Thanks also to the highly reproducible PyTorch reimplementation of StyleGAN2 by Tetratrio.

Animefy
│  README.md -------------------------------- Description of the project.
│  latent.sh -------------------------------- The script to find the latent code of a given image.
│  run.sh ----------------------------------- The script to train the model.
│  synthesis.sh ----------------------------- The script to generate anime images unconditionally.
│  align_images.py -------------------------- Align the faces in the given images, since selfies may not share the same scale.
│  blend.py --------------------------------- Blend the models after training.
│  generate.py ------------------------------ Generate an anime image from a latent code.
│  run_convert_from_tf.py ------------------- Convert a pretrained TensorFlow model file to PyTorch.
│  run_generator.py ------------------------- Generate anime images unconditionally (i.e. without a given latent code), called by `synthesis.sh`.
│  run_metrics.py --------------------------- Calculate metrics of the trained model.
│  run_projector.py ------------------------- Find the latent code of a given image, called by `latent.sh`.
│  run_training.py -------------------------- Train the model, called by `run.sh`.
│  requirements.txt ------------------------- Required packages; install with `pip install -r requirements.txt`.
│  settings_with_pretrain.yaml -------------- Configuration used when training the model.
│
├─environment
│      anime.yaml
│
├─example
│      example.png ------------------------- Example of the iterative process of finding a latent code and generating the corresponding anime image.
│
├─Notebook
│      Animefy-yourself.ipynb -------------- Notebook walking through the project.
│
├─projects
│  │  resize.py ---------------------------- Possibly unused now; resizes a selfie image to 512 × 512.
│  │
│  └─latent
│          image0000-target.png
│
└─stylegan2 -------------------------------- Core files.
    │  loss_fns.py ------------------------- Loss functions.
    │  models.py --------------------------- StyleGAN2 model definitions.
    │  modules.py -------------------------- Modules that StyleGAN2 depends on.
    │  project.py -------------------------- Find the latent code.
    │  train.py ---------------------------- Defines the `Trainer` class used to train the model.
    │  utils.py ---------------------------- Utilities for the model and training.
    │  __init__.py ------------------------- Package init for imports.
    │
    ├─external_models
    │      inception.py -------------------- Inception network (used by the metrics).
    │      lpips.py ------------------------ Calculate the perceptual similarity between images (LPIPS).
    │      __init__.py
    │
    └─metrics ------------------------------ Metric files.
            fid.py ------------------------- Fréchet Inception Distance.
            ppl.py ------------------------- Perceptual path length.
            __init__.py -------------------- Package init for imports.

Dataset

The anime-face dataset we used is available here; it is processed from DANBOORU2018 and contains 140,000 anime faces. (Baidu Drive, code: JIMD)

Installation

  • clone the repository by running
git clone https://github.com/XingruiWang/Animefy.git 
  • dependencies

    We follow the dependencies of pixel2style2pixel; all of them are listed in environment/anime.yaml and can be installed with `conda env create -f environment/anime.yaml`.

Train model

  • If you want to reproduce the project, run run.sh
sh run.sh
  • Remember to edit the configuration file settings_with_pretrain.yaml: set your data path in data_dir and the checkpoints in g_file and d_file (see the first sketch after this list).

  • We trained our model on 4 GPUs and obtained an acceptable result after 2000 iterations (3-4 hours). Our final model was trained for 18000 iterations (around 2 days).

  • If you want to transfer the project to another dataset, e.g. projecting human faces to cat faces, we recommend keeping the StyleGAN2 FFHQ checkpoint as the pretrained model and fine-tuning on your custom dataset (by changing data_dir).

  • Since our goal is to project human faces to anime characters, we freeze the G_mapping layers after loading the pretrained model, to preserve the model's ability to extract human-face features; see the second sketch after this list. If you just want to generate high-quality anime images, you don't need to do that (set param.requires_grad = True in line 767 of run_training.py).
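
As a minimal sketch of the configuration step, the file can also be edited programmatically with PyYAML. This assumes data_dir, g_file, and d_file are top-level keys of settings_with_pretrain.yaml; all paths below are placeholders.

import yaml

# Load the training configuration shipped with the repo.
with open("settings_with_pretrain.yaml") as f:
    cfg = yaml.safe_load(f)

# Point the trainer at your dataset and the pretrained checkpoints
# (placeholder paths; adjust to your setup).
cfg["data_dir"] = "/path/to/anime_faces"
cfg["g_file"] = "checkpoints/G_pretrain.pth"
cfg["d_file"] = "checkpoints/D_pretrain.pth"

# Note: rewriting with PyYAML drops any comments in the file.
with open("settings_with_pretrain.yaml", "w") as f:
    yaml.safe_dump(cfg, f)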
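
The freezing step is, in spirit, the following PyTorch pattern (a sketch, not the repo's exact code; run_training.py is authoritative):

import torch

def freeze_mapping(G: torch.nn.Module) -> None:
    # Freeze the mapping network so that fine-tuning only updates the
    # synthesis layers; set requires_grad = True instead to train everything.
    for name, param in G.named_parameters():
        if "G_mapping" in name:
            param.requires_grad = False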

Model blending

  • The trained model alone is not enough to generate custom anime faces. Inspired by Justin Pinkney, we blended the human-face generator and our anime-face generator, in order to keep the low-resolution information of the real-world human face (pose, head position and angle) and the high-resolution information (big eyes, small nose, hairstyle, etc.) of anime faces.

  • I reimplemented blend_models.py in PyTorch (here); usage:

from blend import blend_models

# Swap layers between the two generators: low-resolution layers come from
# the human-face model (pose, head position), high-resolution layers from
# the anime model (eyes, nose, hairstyle). The third argument selects the
# crossover resolution level.
G_out = blend_models("G_pretrain.pth",  # FFHQ human-face generator
                     "G_out.pth",       # fine-tuned anime generator
                     8, None)
G_out.save('G_blend.pth')
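
For intuition, layer swapping reduces to picking each weight from one of the two generators' state dicts. A generic sketch (this helper and its key-matching predicate are hypothetical, for illustration only; the repo's blend.py is authoritative):

def blend_state_dicts(sd_human, sd_anime, is_high_res):
    # Take low-resolution weights from the human-face generator and
    # high-resolution weights from the anime generator.
    return {key: sd_anime[key] if is_high_res(key) else sd_human[key]
            for key in sd_human}

# Hypothetical predicate: treat synthesis blocks at resolution >= 16 as
# "high-res" (real parameter names depend on the model implementation).
def is_high_res(key: str) -> bool:
    return any(f"{r}x{r}" in key for r in (16, 32, 64, 128, 256, 512))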

Pretrained model

  • StyleGAN2 on FFHQ dataset (config-f, 512x512): the pretrained model, trained on the FFHQ human-face dataset, used to generate human faces.
  • StyleGAN2 on anime characters: our final model, which generates anime faces and projects real-world human faces, namely G_out.pth.
  • StyleGAN2 on anime characters (blended): the blended version of the above, namely G_blend.pth.

Customize your own Anime Character

👉 You can generate your own character in the Colab notebook here (recommended)

Or run it in a terminal:

  • Download the pretrained model G_blend.pth.

  • Put your own pictures in ./projects/real (they must be 512x512).

  • Project your face into the latent space:

sh latent.sh
  • The latent features are stored in ./projects/latent.

  • Run python generate.py to render the anime face from the stored latent code; a sketch of what this does is shown below.
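
Under the hood, generation pushes the projected latent code through the blended generator. A minimal sketch, assuming the stylegan2.models.load API of the underlying PyTorch reimplementation, a hypothetical latent file name, and that the projector saves latents in a form the generator accepts directly (generate.py is the authoritative script):

import torch
import stylegan2
from torchvision.utils import save_image

# Load the blended generator and the latent code found by latent.sh.
G = stylegan2.models.load("G_blend.pth")
G.eval()
latent = torch.load("projects/latent/image0000.pt")  # hypothetical file name

with torch.no_grad():
    img = G(latent)               # synthesize an image from the latent code
img = (img.clamp(-1, 1) + 1) / 2  # map generator output from [-1, 1] to [0, 1]
save_image(img, "anime_face.png")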
