MAE-3D

This is the source code for our proposed Masked Autoencoders in 3D Point Cloud Representation Learning (MAE3D).

Requirements

python >= 3.7

pytorch >= 1.7.0

numpy

scikit-learn

einops

h5py

tqdm

If you are running "pointnet2_ops_lib" for the first time, install it with

pip install pointnet2_ops_lib/.

Datasets

The main datasets used in our project are ShapeNet and ModelNet40, and you can download them here: ShapeNet and ModelNet40.

Then place them under "./data".
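Both datasets ship as HDF5 files (h5py is listed in the requirements). The sketch below writes and reads a tiny toy file in the layout commonly used for ModelNet40 point clouds; the "data"/"label" key names and the 1024-points-per-cloud shape are assumptions here, so check the keys of the downloaded files before relying on them.

```python
import h5py
import numpy as np

# Toy example of a common ModelNet40-style HDF5 layout: a float "data"
# array of shape (num_clouds, num_points, 3) plus integer "label"s.
# Key names are an assumption; inspect the real files' keys first.
with h5py.File("toy_modelnet.h5", "w") as f:
    f["data"] = np.zeros((2, 1024, 3), dtype=np.float32)
    f["label"] = np.array([[0], [7]], dtype=np.int64)

with h5py.File("toy_modelnet.h5", "r") as f:
    points, labels = f["data"][:], f["label"][:]

print(points.shape, labels.ravel())  # (2, 1024, 3) [0 7]
```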

Evaluate

We achieve an accuracy of 93.4% on ModelNet40. The pre-trained model can be found here.

Move "model_cls.t7" to "./checkpoints/mask_ratio_0.7/exp_shapenet55_block/models"; then you can restore our model and evaluate it on ModelNet40 with

python main_cls.py --exp_name exp_shapenet55_block --mask_ratio 0.7 --eval True
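For example, the checkpoint move described above can be done from the repository root (assuming "model_cls.t7" was downloaded into the current directory):

```shell
# Create the expected checkpoint directory and move the weights into it.
mkdir -p checkpoints/mask_ratio_0.7/exp_shapenet55_block/models
if [ -f model_cls.t7 ]; then
    mv model_cls.t7 checkpoints/mask_ratio_0.7/exp_shapenet55_block/models/
fi
```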

Pre-training: Point Cloud Completion

Download the ShapeNet dataset first, and then simply run

python main_pretrain.py --exp_name exp_shapenet55_block --mask_ratio 0.7

If you want to visualize all reconstructed point clouds (which takes a long time), run

python main_pretrain.py --exp_name exp_shapenet55_block --mask_ratio 0.7 --visualize True
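For intuition, pre-training masks a large fraction of the point-cloud patches (70% here, matching --mask_ratio 0.7) and trains the network to complete them. The sketch below shows only the index split, using uniform random masking for simplicity; MAE3D's actual block masking drops spatially adjacent patches instead.

```python
import numpy as np

def random_patch_mask(num_patches, mask_ratio=0.7, seed=0):
    # Split patch indices into visible and masked sets: the encoder sees
    # only the visible patches, and the decoder completes the masked ones.
    rng = np.random.default_rng(seed)
    num_masked = int(num_patches * mask_ratio)
    perm = rng.permutation(num_patches)
    return perm[num_masked:], perm[:num_masked]  # (visible, masked)

visible, masked = random_patch_mask(64, mask_ratio=0.7)
print(len(visible), len(masked))  # 20 44
```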

Fine-tuning: Supervised Classification

Download the ModelNet40 dataset first, and then simply run

python main_cls.py --exp_name exp_shapenet55_block --mask_ratio 0.7 --pretrained True --finetune True

Linear Classifier: Unsupervised Classification

Download the ModelNet40 dataset first, and then simply run

python main_cls.py --exp_name exp_shapenet55_block --mask_ratio 0.7 --pretrained True --linear_classifier True 
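Conceptually, the linear-classifier protocol freezes the pre-trained encoder and fits only a linear classifier on its global features. A toy scikit-learn sketch of that idea, using random arrays as stand-ins for the frozen features (the feature dimension 1024 is a hypothetical choice; only the 40 ModelNet40 classes come from the task):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-ins for frozen encoder outputs: 200 toy "shape features".
feats = rng.normal(size=(200, 1024))
labels = rng.integers(0, 40, size=200)  # ModelNet40 has 40 classes

# Only this linear model is trained; the encoder itself stays frozen.
clf = LogisticRegression(max_iter=200).fit(feats, labels)
pred = clf.predict(feats[:5])
print(pred.shape)  # (5,)
```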

Citation

If you find our work useful, please consider citing:

@article{jiang2023masked,
    title={Masked autoencoders in 3d point cloud representation learning},
    author={Jiang, Jincen and Lu, Xuequan and Zhao, Lizhi and Dazeley, Richard and Wang, Meili},
    journal={IEEE Transactions on Multimedia},
    year={2023},
    publisher={IEEE}
}    

About

[IEEE TMM] The official implementation of MAE3D
