Universal-PS-CVPR2022

Official Pytorch Implementation of Universal Photometric Stereo Network using Global Lighting Contexts (CVPR2022)

Satoshi Ikehata, "Universal Photometric Stereo Network using Global Contexts", CVPR2022

project site

Prerequisites

  • Python 3
  • torch (PyTorch)
  • tensorboard
  • cv2 (opencv-python)
  • timm
  • tqdm

Tested on:

  • Windows 11, Python 3.10.3, PyTorch 1.11.0, CUDA 11.3
    • GPU: Nvidia RTX A6000 (48GB)

Prepare dataset

All you need to run the universal photometric stereo network is a set of shading images and a binary object mask. The object can be illuminated by arbitrary light sources, but the shading variation across images should be sufficient; weak shading variation may yield poor results.

In my implementation, all training and test data must be formatted like this:

 YOUR_DATA_PATH
  ├── A [Suffix: default ".data"]
  │   ├── mask.png
  │   ├── [Prefix (default: "0" (Train), "L" (Test))] imgfile1
  │   ├── [Prefix (default: "0" (Train), "L" (Test))] imgfile2
  │   └── ...
  └── B [Suffix: default ".data"]
      ├── mask.png
      ├── [Prefix (default: "0" (Train), "L" (Test))] imgfile1
      ├── [Prefix (default: "0" (Train), "L" (Test))] imgfile2
      └── ...

For more details, please see my real dataset on the project page. You can change the configuration (e.g., prefix, suffix) in source/modules/config.py.

All masks in our datasets were computed using the software by Konstantin.

Download pretrained model

Checkpoints of the network parameters (the full configuration in the paper) are available here.

To use the pretrained models, extract them as:

  YOUR_CHECKPOINT_PATH
  ├── *.pytmodel
  ├── *.optimizer
  ├── *.scheduler
  └── ...

Running the test

If you don't prepare a dataset yourself, you can use a sample dataset from here.

To run the test, run main.py as:

python source/main.py --session_name session_test --mode Test --test_dir YOUR_DATA_PATH --pretrained YOUR_CHECKPOINT_PATH

Results will be written to output/session_name. You will find normal maps at both the canonical resolution and the input resolution.
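If you post-process the saved normal maps, note that 8-bit RGB normal maps are commonly decoded as n = 2·rgb/255 − 1 followed by re-normalization. The snippet below sketches that common convention; verify it against this repository's actual output encoding (and axis orientation) before relying on it.

```python
import numpy as np

def rgb_to_normals(img):
    """Decode an 8-bit RGB normal-map array into unit normal vectors.

    Assumes the common encoding n = 2 * rgb / 255 - 1; this repository's
    exact output convention should be checked before use.
    """
    n = img.astype(np.float32) / 255.0 * 2.0 - 1.0
    # Re-normalize so every pixel is a unit vector (guard against zeros).
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(norm, 1e-8, None)
```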

Running the training

To run training, run main.py as:

python source/main.py --session_name session_train --mode Train --training_dir YOUR_DATA_PATH

or, to perform both training and testing, use:

python source/main.py --session_name session_train_test --mode TrainAndTest --training_dir YOUR_DATA_PATH --test_dir YOUR_DATA_PATH

The default hyperparameters are described in source/main.py.
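Taken together, the commands above use a small set of flags. The following argparse sketch mirrors only the flags that appear in this README; the defaults and the `choices` list are assumptions, so consult source/main.py for the authoritative set.

```python
import argparse

def build_parser():
    """CLI surface as used in the commands above (defaults are assumed)."""
    p = argparse.ArgumentParser(description="Universal PS network runner (sketch)")
    p.add_argument("--session_name", default="session")
    p.add_argument("--mode", choices=["Train", "Test", "TrainAndTest"], default="Test")
    p.add_argument("--training_dir", default=None)
    p.add_argument("--test_dir", default=None)
    p.add_argument("--pretrained", default=None)
    return p
```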

The training data (PS-Wild) can be downloaded from here.

License

This project is licensed under the GPL License; see the LICENSE file for details.
