This repo contains the official code for the paper "A Generative Framework for Image-based Editing of Material Appearance using Perceptual Attributes" (CGF 2022). Project page: https://perso.liris.cnrs.fr/johanna.delanoy/2022_materials_generative_editing/index.html
Requirements:
- Python 3.6+
- PyTorch 1.0+
- tensorboardX 1.6+
- torchsummary
- tqdm
- Pillow
- easydict
- pytorch-lightning (for normal prediction)
Download the trained weights for the attributes Glossy and Metallic and put them into `experiments/` (keeping the directory structure). Download the trained weights for the normal prediction net and put them in `pix2normal/checkpoints/`.
`test_network.py` runs the network on one image for a given attribute (it also performs the normal prediction).

Usage:

```
test_network.py INPUT_IMAGE ATTR_VAL ATTRIBUTE OUTPUT_IMAGE
```

This edits the image INPUT_IMAGE with ATTRIBUTE set to ATTR_VAL. The trained weights should be in `experiments/final_step1_ATTRIBUTE/checkpoints/G_final.pth.tar` and `experiments/final_step2_ATTRIBUTE/checkpoints/G_final.pth.tar`.
Example use:

```
test_network.py test_images/XXX.png 1.0 glossy test_glossy_1.png
test_network.py test_images/XXX.png 0.0 glossy test_glossy_0.png
test_network.py test_images/XXX.png 1.0 metallic test_metallic_1.png
test_network.py test_images/XXX.png 0.0 metallic test_metallic_0.png
```
Note: the script creates a temporary image `test_normals.png` in the current folder.
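To sweep an attribute over several values, the invocations above can be assembled programmatically. This is a sketch only: `edit_commands` is a hypothetical convenience helper, not part of the repo, and the commands it builds must still be run from the repo root with the weights in place.

```python
from pathlib import Path


def edit_commands(input_image, attribute, values, out_dir="."):
    """Build test_network.py command lines for a sweep of attribute values.

    Hypothetical helper: output names follow the README's examples
    (e.g. test_glossy_0.png, test_glossy_1.png).
    """
    cmds = []
    for v in values:
        out = Path(out_dir) / f"test_{attribute}_{v:g}.png"
        cmds.append(["python", "test_network.py",
                     input_image, str(v), attribute, str(out)])
    return cmds
```

Each returned list can be passed to `subprocess.run` once the repo and trained weights are available.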
Code structure:
- `agents`: the main training files (architecture, optimisation scheme). `trainingModule` is an abstract agent that groups basic training functions (similar to the PyTorch Lightning abstraction). Most of the optimization/training procedure is in `MaterialEditNet` (the simplest one); the others build on it by changing a few key functions (grouping the inputs, defining the nets):
  - `MaterialEditNet`: architecture of G1 without the normals; contains most of the code (equivalent to FaderNet)
  - `MaterialEditNet_with_normals`: G1
  - `MaterialEditNet_with_normals_2steps`: G2
- `models`: code of the networks
- `datasets`: code to read the datasets
- `utils`: various utilities
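The agent hierarchy described above can be sketched as follows. This is an illustration of the pattern, not the repo's actual code: the real `trainingModule` and `MaterialEditNet` classes live in `agents` and their method names may differ.

```python
# Illustrative sketch (NOT the repo's actual classes): an abstract training
# module holds the shared optimization loop, and each concrete agent only
# redefines how its networks are built and how a batch is grouped into inputs.
class TrainingModule:
    """Abstract agent grouping the shared training logic."""

    def build_nets(self):
        raise NotImplementedError  # each agent defines its networks

    def group_inputs(self, batch):
        raise NotImplementedError  # each agent arranges its own inputs

    def compute_loss(self, inputs):
        raise NotImplementedError

    def training_step(self, batch):
        # Shared procedure: arrange the inputs, then evaluate the loss.
        inputs = self.group_inputs(batch)
        return self.compute_loss(inputs)


class ToyEditNet(TrainingModule):
    """Minimal concrete agent, standing in for a MaterialEditNet variant."""

    def group_inputs(self, batch):
        return batch["image"]

    def compute_loss(self, inputs):
        return sum(inputs) / len(inputs)  # placeholder "loss"
```

In the repo, `MaterialEditNet_with_normals` and `MaterialEditNet_with_normals_2steps` extend the base agent in this style, overriding only a few key functions.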
Data:
- `configs`: configuration files to launch the trainings or tests
- `experiments`: snapshots of experiments
- `test_images`: images to test the network on
Normal prediction:
- `pix2normal`: code of the network
- `normal_inference.py`: gets the normal prediction for one image
Training:
Download the training dataset. The folder `configs` contains the configuration files used to train the networks as in the paper.
- Train G1: `python train.py train_materialEditNet_withnormals.yaml`
- Train G2 (after G1 is trained): `python train.py train_materialEditNet_2steps.yaml`

Parameters that may need to be changed: `data_root` (path to the dataset), `attributes` (attribute to edit), `g1_checkpoint` for G2 (path to the trained G1), `checkpoint` (to load weights, for testing or resuming training).
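As a rough illustration of those overridable parameters, a config could be assembled like this. The key names come from the list above, but the actual YAML files in `configs/` are the authority on the schema; `make_config` and the default values are hypothetical.

```python
# Sketch only: parameter names are taken from the README's list of
# commonly-changed fields; the real configs in configs/ are YAML files
# and may contain many more options.
def make_config(**overrides):
    cfg = {
        "data_root": "/path/to/dataset",  # path to the training dataset
        "attributes": ["glossy"],         # attribute(s) to edit
        "g1_checkpoint": None,            # path to trained G1 (needed for G2)
        "checkpoint": None,               # weights to load (test / resume)
    }
    unknown = set(overrides) - set(cfg)
    if unknown:
        raise KeyError(f"unknown config keys: {sorted(unknown)}")
    cfg.update(overrides)
    return cfg
```

For example, `make_config(data_root="/data/materials", attributes=["metallic"])` mirrors editing those two fields in the YAML before launching `train.py`.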