RefRec

Official repository for "RefRec: Pseudo-labels Refinement via Shape Reconstruction for Unsupervised 3D Domain Adaptation"

[Project page] [Paper]

Authors

Adriano Cardace - Riccardo Spezialetti - Pierluigi Zama Ramirez - Samuele Salti - Luigi Di Stefano

Requirements

We rely on several libraries: PyTorch Lightning, Weights & Biases, and Hesiod.

To run the code, please follow the instructions below.

  1. Install the required dependencies:
python -m venv env
source env/bin/activate
python -m pip install --upgrade pip
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
  2. Install PyTorchEMD following https://github.com/daerduoCarey/PyTorchEMD (see daerduoCarey/PyTorchEMD#6 for recent versions of PyTorch); a usage sketch is shown below.
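
PyTorchEMD provides a CUDA implementation of the Earth Mover's Distance, which can serve as the point-cloud reconstruction loss. A minimal sketch, assuming the installed package exposes earth_mover_distance as in its README (tensor shapes and names below are illustrative only):

import torch
from emd import earth_mover_distance  # provided by daerduoCarey/PyTorchEMD

pred = torch.rand(8, 1024, 3, device="cuda")    # reconstructed point clouds (B, N, 3)
target = torch.rand(8, 1024, 3, device="cuda")  # input point clouds (B, N, 3)

# EMD per cloud; average over the batch to obtain a scalar reconstruction loss
d = earth_mover_distance(pred, target, transpose=False)
loss = (d / pred.shape[1]).mean()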

Download and load the datasets on the W&B server (registration required)

Request dataset access at https://drive.google.com/file/d/14mNtQTPA-b9_qzfHiadUIc_RWvPxfGX_/view?usp=sharing.

The dataset is the same as the one provided by the original authors at https://github.com/canqin001/PointDAN. For convenience, we provide the preprocessed version used in this work. To train the reconstruction network, we first need to merge the two datasets. Then, load all the required datasets to the W&B server:

mkdir data
unzip PointDA_aligned.zip -d data/
cd data
cp -r modelnet modelnet_scannet
rsync -av scannet modelnet_scannet
./load_data.sh
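
load_data.sh uploads the prepared folders to the W&B server, presumably as dataset artifacts. If you need to adapt it to your own W&B account, a minimal sketch of logging a dataset directory as an artifact looks like the following (the project and artifact names here are placeholders, not necessarily those used by the script):

import wandb

# Hypothetical project/artifact names; match them to load_data.sh and your W&B entity
run = wandb.init(project="refrec", job_type="upload-data")
artifact = wandb.Artifact("modelnet_scannet", type="dataset")
artifact.add_dir("data/modelnet_scannet")  # attach the merged point-cloud folder
run.log_artifact(artifact)
run.finish()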

Training

To train on the modelnet -> scannet adaptation, simply execute the following command:

./train_pipeline_m2sc.sh
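
The training pipeline presumably reads the datasets back from the W&B server. If you need to point a script at a different artifact, a minimal sketch of downloading one inside a training run looks like this (artifact and project names are placeholders matching the upload sketch above):

import wandb

run = wandb.init(project="refrec", job_type="train")
artifact = run.use_artifact("modelnet_scannet:latest", type="dataset")
data_dir = artifact.download()  # local directory containing the point clouds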
