
Dirty-GANcing

This repository contains the code and models for the Dirty GANcing Visium workshop at AMLD 2020.

Workshop setup

Running the code in this repository is, for the most part, computationally expensive, and it is best to have GPU access. For workshop participants, we have prepared the notebooks so that you can easily load them into Google Colab and enjoy the free GPU service offered by Google.

  1. Open Google Colab and sign in with your Google account or create a new one.

  2. Click on File and then Open notebook.... Select the Github tab, look for VisiumCH to find this repository and select the notebook you would like to run.

  3. Once you are in the notebook, click on Runtime, Change runtime type and then select GPU as hardware accelerator.

  4. Finally, click on Connect and you should be ready!
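
Alternatively, notebooks hosted on GitHub can usually be opened directly through a Colab URL of the following form. This is only a sketch: the organization, branch, and notebook filename are assumptions and depend on the repository's actual layout.

https://colab.research.google.com/github/VisiumCH/AMLD2020-Dirty-GANcing/blob/master/{notebook_name}.ipynb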

Requirements

For users outside the workshop who would like to experiment more with the repo, here are a few installation steps:

Create a virtual environment and install the dependencies:

python -m venv env
source env/bin/activate
pip install -r requirements_1st.txt
pip install -r requirements_2nd.txt
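
As a quick sanity check that the environment is usable (assuming the requirements pull in PyTorch, which the training scripts rely on), you can verify that a CUDA-capable GPU is visible with:

python -c "import torch; print(torch.cuda.is_available())"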

Basic usage

  1. Put your source video (an .mp4 file) in ./data/sources/{source_folder_name}/ and then run:
python src/data_preparation/prepare_source.py -s data/sources/{source_folder_name}
  2. Put your target video (an .mp4 file) in ./data/targets/{target_folder_name}/ and then run:
python src/data_preparation/prepare_target.py -s data/targets/{target_folder_name}
  3. Train the pose2vid model by running:
python src/GANcing/train_pose2vid.py -t data/targets/{target_folder_name} -r {run_name}

The training can be monitored:

  • Using Tensorboard by running tensorboard --logdir ./checkpoints
  • In a basic html webpage by running python -m http.server in ./checkpoints/{run_name}/web
  4. If the training is stopped for any reason, you can restart it from the last checkpoint by using the same command as in step 3.

  5. Once the training is finished or you have stopped it, you can perform the pose normalization with:

python src/data_postprocessing/normalization.py -s data/sources/{source_folder_name} -t data/targets/{target_folder_name}
  6. You can then perform the pose transfer with:
python src/data_postprocessing/transfer.py -s data/sources/{source_folder_name} -r {run_name}
  7. Finally, you can visualize the results as a gif by running:
python src/data_postprocessing/make_gif.py -s data/sources/{source_folder_name} -r results/{run_name}
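
As a purely illustrative run-through of the steps above, with placeholder folder names (bruno_mars as the source, my_dance as the target, first_run as the run name), the whole pipeline would look like this; substitute your own names:

python src/data_preparation/prepare_source.py -s data/sources/bruno_mars
python src/data_preparation/prepare_target.py -s data/targets/my_dance
python src/GANcing/train_pose2vid.py -t data/targets/my_dance -r first_run
python src/data_postprocessing/normalization.py -s data/sources/bruno_mars -t data/targets/my_dance
python src/data_postprocessing/transfer.py -s data/sources/bruno_mars -r first_run
python src/data_postprocessing/make_gif.py -s data/sources/bruno_mars -r results/first_run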

Extended notice

These extended instructions are for users who want to add Face Enhancement to their model.

  1. Start by creating the necessary files with:
python src/face_enhancer/prepare_face_enhancer_data.py -t data/targets/{target_folder_name} -r {run_name}
  2. Train the Face Enhancer with:
python src/face_enhancer/train_face_enhancer.py -t data/targets/{target_folder_name} -r {run_name}
  3. Once you are satisfied with the training results, you can stop it and perform the face enhancement with:
python src/face_enhancer/run_face_enhancer.py -s data/sources/{source_folder_name} -t data/targets/{target_folder_name} -r {run_name}

This will create a new results folder in ./results/{run_name}_enhanced, which you can use as a substitute in make_gif.py
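
For instance, reusing the placeholder names from the example above, the enhanced gif could then be generated with:

python src/data_postprocessing/make_gif.py -s data/sources/bruno_mars -r results/first_run_enhanced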

Citations

This repo is an adaptation of the paper Everybody Dance Now. If you use it in your own work, please consider citing the original authors:

@article{chan2018everybody,
  title={Everybody dance now},
  author={Chan, Caroline and Ginosar, Shiry and Zhou, Tinghui and Efros, Alexei A},
  journal={arXiv preprint arXiv:1808.07371},
  year={2018}
}

The code in this repo is adapted and corrected for the AMLD workshop, and is originally based on CUHKSZ-TQL's repo.

It also borrows heavily from:
