
A Dataset-Dispersion Perspective on Reconstruction Versus Recognition in Single-View 3D Reconstruction Networks [3DV 2021]

Yefan Zhou, Yiru Shen, Yujun Yan, Chen Feng, Yaoqing Yang

Paper (arXiv)

An SVR model can be biased toward recognition (classification-based) or toward reconstruction, depending on how dispersed its training data is.


Dispersion score is a data-driven metric that measures an internal mechanism of single-view 3D reconstruction networks: the tendency of a network to perform recognition or reconstruction. It can also be used to diagnose problems in the training data and to guide the design of data augmentation schemes.
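The score is clustering-based: roughly, how tightly a set of samples (input images, or output shapes, embedded as feature vectors) clumps into clusters. The sketch below is an illustration only, not this repository's implementation; it measures dispersion as the mean distance of samples to their k-means centroid, using only NumPy, and the helper names `kmeans` and `dispersion_score` are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization:
    the first centroid is X[0]; each subsequent centroid is the sample
    farthest from the centroids chosen so far."""
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None] - np.array(centroids)[None], axis=-1), axis=1)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        # Assign each sample to its nearest centroid, then recompute means.
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep the old centroid if a cluster empties
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def dispersion_score(X, k=2):
    """Mean distance from each sample to its assigned cluster centroid:
    low = samples form tight clusters, high = samples are dispersed."""
    X = np.asarray(X, dtype=float)
    centroids, labels = kmeans(X, k)
    return float(np.mean(np.linalg.norm(X - centroids[labels], axis=-1)))

# Two tight clusters score much lower than one diffuse cloud.
rng = np.random.default_rng(1)
tight = np.concatenate([rng.normal(0.0, 0.01, (50, 2)),
                        rng.normal(5.0, 0.01, (50, 2))])
spread = rng.normal(0.0, 2.0, (100, 2))
print(dispersion_score(tight, 2), dispersion_score(spread, 2))
```

Intuitively, a low input dispersion score with a high output dispersion score would suggest the network maps similar inputs to varied shapes (reconstruction-like behavior), while the reverse suggests it collapses inputs onto a few memorized shapes (recognition-like behavior).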

Installation

To install dispersion-score and develop locally:

  • PyTorch version >= 1.6.0
  • Python version = 3.6
conda create -n dispersion_score python=3.6
conda activate dispersion_score
git clone https://github.com/YefanZhou/dispersion-score.git
cd dispersion-score
chmod +x setup.sh 
./setup.sh

Dataset

Download the provided synthetic dataset and customized ShapeNet renderings as follows, or build the synthetic dataset or the renderings yourself.

bash download/download_data.sh

Manually download ShapeNet V1 (AtlasNet version): pointclouds, renderings, and unzip the two archives as follows.

unzip ShapeNetV1PointCloud.zip -d ./dataset/data/
unzip ShapeNetV1Renderings.zip -d ./dataset/data/

Experiment Results

Download our trained models:

bash download/download_checkpts.sh

Experiments on Synthetic Datasets:

Measure Dispersion Score (DS) and Visualize Measurements

python eval_scripts/eval_ds_synthetic.py --gpus [IDS OF GPUS TO USE]

Run the notebook to visualize the results and reproduce plots.

Model Training

Instead of using the trained models, you can also train models from scratch as follows.

python train_scripts/train_synthetic.py --gpus [IDS OF GPUS TO USE]

Experiments on ShapeNet:

Measure Dispersion Score (DS) and Visualize Measurements

# More dispersed training images
python eval_scripts/eval_ds_moreimgs.py --gpus [IDS OF GPUS TO USE]
# More dispersed training shapes 
python eval_scripts/eval_ds_moreshapes.py --gpus [IDS OF GPUS TO USE] 

Run the notebook to visualize the results and reproduce plots.

Model Training

Instead of using the trained models, you can also train models from scratch as follows.

python train_scripts/train_more_imgs.py --gpus [IDS OF GPUS TO USE]
python train_scripts/train_more_shapes.py --gpus [IDS OF GPUS TO USE]

The code is built on top of AtlasNet.

Citation

If you find the repository useful for your work, please cite our paper.

@INPROCEEDINGS {9665835,
author = {Y. Zhou and Y. Shen and Y. Yan and C. Feng and Y. Yang},
booktitle = {2021 International Conference on 3D Vision (3DV)},
title = {A Dataset-Dispersion Perspective on Reconstruction Versus Recognition in Single-View 3D Reconstruction Networks},
year = {2021},
pages = {1331-1340},
keywords = {training;three-dimensional displays;image recognition;systematics;shape;training data;artificial neural networks},
doi = {10.1109/3DV53792.2021.00140},
url = {https://doi.ieeecomputersociety.org/10.1109/3DV53792.2021.00140},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
month = {dec}
}
