MorphGAN implementation
=======================

This is the implementation of the GAN pipeline and the MorphGAN pipeline for the project Rendering Alternative: Deep Image Based Sanssouci (REAL DIBS).

Requirements Installation
-------------------------

This application requires the following packages:

  • python=3.7
  • tensorflow==2.1.0
  • tensorflow_datasets==3.1.0
  • scikit-image
  • pygame
  • sewar
  • trimesh
  • tensorflow-graphics
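If installing with pip, the Python packages above can be captured in a ``requirements.txt`` (version pins taken from the list above; Python 3.7 itself must come from your environment manager, e.g. conda):

```text
tensorflow==2.1.0
tensorflow_datasets==3.1.0
scikit-image
pygame
sewar
trimesh
tensorflow-graphics
```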

It also requires the following repositories:

Run
---


GAN pipeline
~~~~~~~~~~~~

To run the GAN pipeline from scratch, follow these steps:

  1. setup the data structure. Filenames are identical across folders; the file structure is always::

       data/<dataset>
       data/<dataset>/undistorted/test/
       data/<dataset>/undistorted/test_one/
       data/<dataset>/undistorted/train/

       results/render/<dataset>/paired/test/
       results/render/<dataset>/paired/test_one/
       results/render/<dataset>/paired/train/
       results/render/<dataset>/masks/test/
       results/render/<dataset>/masks/test_one/
       results/render/<dataset>/masks/train/

  2. run utils/masks.py to create masks from RGBA renders

  3. create a <datasetname>_dataset.py for the new dataset

  4. add:

    ``from <datasetname>_dataset import <datasetname>_dataset`` to data/data_manager.py

  5. create a config with make_config.py for training

  6. run:

    python train_gan.py config/<config>.json
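The folder layout required by step 1 can be created with a small shell script. This is only a convenience sketch; ``mydataset`` is a hypothetical dataset name:

```shell
# Create the expected folder layout for a hypothetical dataset "mydataset".
DATASET=mydataset
for split in test test_one train; do
    mkdir -p "data/$DATASET/undistorted/$split"
    mkdir -p "results/render/$DATASET/paired/$split"
    mkdir -p "results/render/$DATASET/masks/$split"
done
```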


For inference:

  1. run:
    python test_gan.py config/<config>.json
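The JSON configs passed to ``train_gan.py`` and ``test_gan.py`` are generated by ``make_config.py``. As a rough sketch only, such a file might look like the following; every field name here is an assumption except ``ckpt_path``, which the MorphGAN inference instructions mention:

```json
{
    "model": "pix2pix",
    "dataset": "sullens",
    "epochs": 10000,
    "ckpt_path": "checkpoints/pix2pix_sullens"
}
```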

Iterate over steps 5 and 6 and the inference step to train over other existing datasets.


A working training experiment on the sullens dataset can be launched with::

    python train_gan.py config/pix2pix_sullens_epochs_10000.json
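``utils/masks.py`` (step 2 in both pipelines) builds masks from the RGBA renders. A minimal sketch of the underlying idea, assuming the mask is simply the thresholded alpha channel (the project's actual script may differ):

```python
import numpy as np

def alpha_to_mask(rgba: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Derive a binary mask from the alpha channel of an RGBA image.

    `rgba` is an (H, W, 4) array with values in [0, 1]; pixels whose
    alpha exceeds `threshold` are treated as foreground.
    """
    assert rgba.ndim == 3 and rgba.shape[-1] == 4
    return (rgba[..., 3] > threshold).astype(np.uint8)

# Tiny synthetic example: a 2x2 render with one opaque pixel.
rgba = np.zeros((2, 2, 4))
rgba[0, 0] = [1.0, 0.0, 0.0, 1.0]  # opaque red pixel
mask = alpha_to_mask(rgba)         # 1 at (0, 0), 0 elsewhere
```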

MorphGAN pipeline
~~~~~~~~~~~~~~~~~

To run the MorphGAN pipeline from scratch, follow these steps:

  1. setup the data structure. Filenames are identical across folders; the file structure is always::

       data/<dataset>
       data/<dataset>/undistorted/test/
       data/<dataset>/undistorted/test_one/
       data/<dataset>/undistorted/train/

       results/render/<dataset>/paired/test/
       results/render/<dataset>/paired/test_one/
       results/render/<dataset>/paired/train/
       results/render/<dataset>/masks/test/
       results/render/<dataset>/masks/test_one/
       results/render/<dataset>/masks/train/

  2. run utils/masks.py to create masks from RGBA renders

  3. Run the meshoptim-realdibs pipeline

  4. make a new morphed dataset (like data/sullens_morphed_dataset) where:

     - ``morph_path``: path to the morphed-images output directory of step 3
     - ``render_path``: path to the render directories of step 1

  5. add:

    ``from <datasetname>_dataset import <datasetname>_dataset`` to data/data_manager_morphed.py

  6. make a config with ``make_config.py``, specifying the new dataset made in step 5.

  7. run:

    python train_morphgan.py config/<config>.json


For inference:

  1. get new images and a new .txt file with the new camera locations and names by running the blender script over the new cameras.
  2. use the meshoptim-realdibs pipeline to generate new morphed images for the new set of cameras.
  3. make a new dataset similarly to training step 4, where ``morph_path`` is the path to the new images of step 1 and ``render_path`` is the path to the morphed images of step 2.
  4. add ``from <datasetname>_dataset import <datasetname>_dataset`` to data/data_manager_morphed.py.
  5. make a new config with ``ckpt_path`` set to the checkpoints produced by training step 7.
  6. run ``inference_morphgan.py`` over the new config.
  7. run ``utils/make_video.py`` over the output folder generated by step 6.

A working training experiment on the sullens dataset can be launched with::

    python train_morphgan.py config/pix2pix_sullens_morphed_0.2.0_epochs_10000.json