
BEV-MAE: Bird's Eye View Masked Autoencoders for Point Cloud Pre-training in Autonomous Driving Scenarios

This is the official implementation of BEV-MAE.

Model

We release the pre-trained weights of VoxelNet on the Waymo dataset.

Pre-trained 3D backbone | Dataset           | Weights
VoxelNet                | Waymo (20% data)  | Google_drive
VoxelNet                | Waymo (full data) | Google_drive

Our code is based on OpenPCDet (version 0.5). To use our pre-trained weights, please refer to INSTALL.md for installation and follow the instructions in GETTING_STARTED.md to train the model.
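As a rough sketch of how a released checkpoint of this kind can be inspected and restricted to the 3D backbone before fine-tuning, the snippet below builds a stand-in checkpoint file and filters its weights. The key names (`model_state`, the `backbone_3d.` prefix) are assumptions for illustration, not a guarantee of the actual checkpoint layout; the real file comes from the Google Drive links above.

```python
import torch

# Stand-in checkpoint to illustrate the format; key names are assumptions,
# not the confirmed layout of the released BEV-MAE weights.
fake_ckpt = {"model_state": {"backbone_3d.conv1.weight": torch.zeros(3, 3),
                             "dense_head.cls.weight": torch.zeros(2, 3)}}
torch.save(fake_ckpt, "bev_mae_pretrained.pth")

# Load the checkpoint on CPU and keep only the 3D-backbone weights -- the
# usual pattern when initializing a detector from a pre-trained backbone.
ckpt = torch.load("bev_mae_pretrained.pth", map_location="cpu")
backbone_weights = {k: v for k, v in ckpt["model_state"].items()
                    if k.startswith("backbone_3d.")}
print(sorted(backbone_weights))  # -> ['backbone_3d.conv1.weight']
```

In practice OpenPCDet handles this for you when a pre-trained checkpoint is passed to its training script, so manual filtering like this is only needed for custom loading logic.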

Training

See the scripts in tools/run.sh.

Acknowledgements

BEV-MAE is based on OpenPCDet. It is also greatly inspired by the open-source code Occupancy-MAE.

Citation

If BEV-MAE is useful or relevant to your research, please kindly recognize our contributions by citing our paper:

@inproceedings{lin2024bevmae,
  title={BEV-MAE: Bird's Eye View Masked Autoencoders for Point Cloud Pre-training in Autonomous Driving Scenarios},
  author={Lin, Zhiwei and Wang, Yongtao and Qi, Shengxiang and Dong, Nan and Yang, Ming-Hsuan},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2024}
}

Contact Us

If you have any problems with this work, please feel free to reach out to us at zwlin@pku.edu.cn.

The project is free for academic research purposes only; commercial use requires authorization. For commercial licensing, please contact wyt@pku.edu.cn.
