This repository provides the official code (PyTorch implementation) for the paper "MaskLRF: Self-supervised Pretraining via Masked Autoencoding of Local Reference Frames for Rotation-invariant 3D Point Set Analysis". The paper has been accepted for publication in the IEEE Access journal.
My code has been tested on Ubuntu 22.04. I highly recommend using the Docker container "nvcr.io/nvidia/pytorch:21.09-py3" provided by NVIDIA NGC. After launching the Docker container, run the following shell script to install the prerequisite libraries:
./prepare.sh
See DATASET.md for details on preparing the datasets.
Run the following shell script to start pretraining from scratch with the configurations used in the paper.
The pretrained parameters will be saved as "experiments/pretrain/ckpt-last.pth".
./Run_MaskLRF_pretraining.sh
Alternatively, you can download the pretrained DNN parameters below. Save ckpt-last.pth in the directory "experiments/pretrain/".
| DNN model | Dataset for pretraining | Pretrained parameters |
| --- | --- | --- |
| R2PT | ShapeNetCore55 | ckpt-last.pth |
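If the checkpoint follows the conventions of Point-MAE-style code, the weights are stored as a state dict whose parameter names may carry a wrapper prefix from distributed training. The key "base_model" and the "module." prefix below are assumptions for illustration, not details confirmed by this repository:

```python
# Hypothetical sketch: inspect a downloaded checkpoint and strip a
# DataParallel-style "module." prefix before loading it into a model.
# The checkpoint layout shown here is an assumption.
def strip_prefix(state_dict, prefix="module."):
    """Remove a wrapper prefix from parameter names, if present."""
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

# Stand-in for torch.load("experiments/pretrain/ckpt-last.pth"):
ckpt = {"base_model": {"module.encoder.weight": [0.0], "head.bias": [1.0]}}
clean = strip_prefix(ckpt["base_model"])
print(sorted(clean))  # parameter names with the "module." prefix removed
```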
Run the corresponding shell script to finetune the pretrained model and evaluate its accuracy on a downstream task.
By default, finetuning/evaluation is done in the NR/SO3 rotation setting, i.e., the model is finetuned on unrotated shapes and evaluated under arbitrary SO(3) rotations.
A log file will be saved in the directory "experiments/".
./Run_MaskLRF_finetuning_cls.sh
./Run_MaskLRF_finetuning_fewshot.sh
./Run_MaskLRF_finetuning_partseg.sh
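In the SO3 test setting, each point cloud is rotated by a random 3D rotation at evaluation time. A minimal NumPy sketch of drawing such a rotation and applying it to a dummy point cloud (illustrative only, not code from this repository):

```python
import numpy as np

def random_rotation(rng):
    """Draw a uniform random rotation matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))   # fix column signs for a consistent convention
    if np.linalg.det(q) < 0:   # ensure det = +1 (a rotation, not a reflection)
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(0)
R = random_rotation(rng)
pts = rng.normal(size=(1024, 3))   # a dummy point cloud
rotated = pts @ R.T                # rotate every point
print(np.allclose(np.linalg.det(R), 1.0))  # True: R lies in SO(3)
```

A rotation-invariant method should predict the same label for `pts` and `rotated`; distances from the origin are preserved under this transform.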
My code is built upon Point-MAE.