OpenOcc is an open source 3D occupancy prediction codebase implemented with PyTorch.
- **Multiple Benchmarks Support.** We support training and evaluation on multiple benchmarks, including nuScenes LiDAR segmentation, SurroundOcc, OpenOccupancy, and the 3D Occupancy Prediction Challenge. You can even train with sparse LiDAR supervision and evaluate with dense annotations. 😝
- **Extendable Modular Design.** We design our pipeline to be easily composable and extendable. Feel free to explore other combinations such as TPVDepth, VoxelDepth, or TPVFusion with simple modifications. 😉

Status | Name | Description |
---|---|---|
✅ | ImagePointWrapper | nuScenes LiDAR Segmentation |
⭕ | | SurroundOcc |
✅ | NuScenes3DOcc | OpenOccupancy |
✅ | NuScenes3DOPC | 3D Occupancy Prediction Challenge |

Status | Name | Description |
---|---|---|
✅ | TPVDepthLSSLifter | Use an estimated depth distribution to lift image features into the voxel space (LSS). |
✅ | TPVPlainLSSLifter | Uniformly put image features along the corresponding ray (MonoScene). |
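The difference between the two LSS-style lifters comes down to the weights placed along each camera ray. A minimal plain-Python sketch for a single pixel (shapes and names are illustrative only; the real lifters operate on batched PyTorch tensors over whole images):

```python
def lift_pixel(feature, depth_weights):
    """Lift one pixel's feature vector onto D depth bins along its ray.

    feature:       list of C floats (the pixel's image feature)
    depth_weights: list of D floats summing to 1 (per-bin weights)
    Returns a D x C list: one weighted copy of the feature per depth bin.
    """
    return [[w * f for f in feature] for w in depth_weights]

feature = [1.0, 2.0]

# DepthLSS style: an estimated depth distribution concentrates the
# feature mass at the likely depth (here, the middle bin).
depth_lss = lift_pixel(feature, [0.1, 0.8, 0.1])

# PlainLSS style: the feature is spread uniformly over all bins (MonoScene).
plain_lss = lift_pixel(feature, [1 / 3, 1 / 3, 1 / 3])
```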

Status | Name | Description |
---|---|---|
✅ | TPVDepthLSSLifter, TPVPlainLSSLifter | Perform pooling to obtain TPV features. |
⭕ | | Perform pooling to obtain BEV features. |
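Pooling a voxel volume into tri-perspective-view (TPV) planes simply collapses one spatial axis per plane. A toy sketch with scalar occupancies (axis naming is hypothetical; the actual modules pool C-channel feature vectors and may use mean instead of max):

```python
def tpv_pool(voxels):
    """Max-pool an X*Y*Z scalar grid into three orthogonal TPV planes."""
    X, Y, Z = len(voxels), len(voxels[0]), len(voxels[0][0])
    xy = [[max(voxels[x][y][z] for z in range(Z)) for y in range(Y)] for x in range(X)]
    zx = [[max(voxels[x][y][z] for y in range(Y)) for x in range(X)] for z in range(Z)]
    yz = [[max(voxels[x][y][z] for x in range(X)) for z in range(Z)] for y in range(Y)]
    return xy, zx, yz

# 2 x 2 x 2 toy volume
voxels = [[[1, 2], [3, 4]],
          [[5, 6], [7, 8]]]
xy, zx, yz = tpv_pool(voxels)
```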

Status | Name | 3D Scene Representation | Description |
---|---|---|---|
✅ | TPVQueryLifter | TPV | Use deformable cross-attention to update TPV queries |
⭕ | | BEV | Use deformable cross-attention to update BEV queries |
⭕ | | Voxel | Use deformable cross-attention to update Voxel queries |
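The query-based lifters follow the deformable-attention pattern: each scene query samples the image feature map at a few offsets around its projected reference point and mixes the samples with attention weights. A scalar-valued sketch (in the real modules the offsets and weights are predicted from the query by linear layers, and sampling runs over multiple heads, feature levels, and cameras):

```python
def bilinear(fmap, x, y):
    """Bilinearly sample a 2D scalar map (list of rows) at continuous (x, y)."""
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, len(fmap[0]) - 1), min(y0 + 1, len(fmap) - 1)
    dx, dy = x - x0, y - y0
    return (fmap[y0][x0] * (1 - dx) * (1 - dy) + fmap[y0][x1] * dx * (1 - dy)
          + fmap[y1][x0] * (1 - dx) * dy       + fmap[y1][x1] * dx * dy)

def deform_attn(fmap, ref_xy, offsets, weights):
    """One deformable-attention read: sample the map at ref + offsets
    and combine the samples with the attention weights."""
    rx, ry = ref_xy
    return sum(w * bilinear(fmap, rx + ox, ry + oy)
               for (ox, oy), w in zip(offsets, weights))

fmap = [[0.0, 1.0],
        [2.0, 3.0]]
center = bilinear(fmap, 0.5, 0.5)  # average of the four corners
updated = deform_attn(fmap, (0.5, 0.5),
                      offsets=[(0.0, 0.0), (0.5, 0.5)],
                      weights=[0.5, 0.5])
```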

Status | Name | Description |
---|---|---|
✅ | TPVFormerEncoder | Use self-attention to aggregate features |
✅ | TPVConvEncoder | Use 2D convolution to aggregate features |
⭕ | | Use 3D convolution to aggregate features |

Status | Name | Description |
---|---|---|
✅ | CELoss | Cross-entropy loss |
✅ | LovaszSoftmaxLoss | Lovasz-softmax loss |
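Cross-entropy optimizes per-voxel likelihood, while Lovasz-softmax directly optimizes a smooth surrogate of the IoU. The heart of the latter is the gradient of the Lovasz extension of the Jaccard loss, computed over prediction errors sorted in decreasing order. A plain-Python sketch of that standard computation (the loss in the codebase operates on sorted PyTorch tensors per class):

```python
def lovasz_grad(gt_sorted):
    """Gradient of the Lovasz extension of the Jaccard (IoU) loss.

    gt_sorted: 0/1 ground-truth labels, reordered so the corresponding
    prediction errors are in decreasing order.
    """
    total = sum(gt_sorted)
    jaccard, inter, union = [], total, total
    for g in gt_sorted:
        inter -= g        # running intersection with the ground truth
        union += 1 - g    # running union
        jaccard.append(1.0 - inter / union)
    # successive differences give each position's marginal IoU cost
    return [jaccard[0]] + [jaccard[i] - jaccard[i - 1]
                           for i in range(1, len(jaccard))]
```

The sorted errors are then dotted with this gradient to obtain the loss, so the worst-ranked mistakes receive the largest weight.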
Coming soon.
- Create a conda environment with Python 3.8.
- Install PyTorch and torchvision with the versions specified in requirements.txt.
- Follow the instructions at https://mmdetection3d.readthedocs.io/en/latest/getting_started.html#installation to install mmcv-full, mmdet, mmsegmentation, and mmdet3d with the versions specified in requirements.txt.
- Install timm, numba, and pyyaml with the versions specified in requirements.txt.
- Install the CUDA extensions:

  ```bash
  python setup.py develop
  ```
- Download pretrained weights and put them in ckpts/:
  - ImageNet-1K pretrained ResNet50 (same as torchvision://resnet50): https://cloud.tsinghua.edu.cn/f/3d0cea3f6ac24e019cea/?dl=1
- Create a soft link from data/nuscenes to your_nuscenes_path. The dataset should be organized as follows:

  ```
  TPVFormer/data
  ├── nuscenes          # downloaded from www.nuscenes.org
  │   ├── lidarseg
  │   ├── maps
  │   ├── samples
  │   ├── sweeps
  │   └── v1.0-trainval
  ├── nuscenes_infos_train.pkl
  └── nuscenes_infos_val.pkl
  ```
- Download the train/val pickle files and put them in data/:
  - nuscenes_infos_train.pkl: https://cloud.tsinghua.edu.cn/f/ede3023e01874b26bead/?dl=1
  - nuscenes_infos_val.pkl: https://cloud.tsinghua.edu.cn/f/61d839064a334630ac55/?dl=1
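A quick way to catch a mis-organized dataset before launching training is to verify the expected entries up front. A hypothetical helper, not part of the codebase, whose path list mirrors the layout above:

```python
import os

# Entries expected under data/, mirroring the layout shown above.
EXPECTED = [
    "nuscenes/lidarseg",
    "nuscenes/maps",
    "nuscenes/samples",
    "nuscenes/sweeps",
    "nuscenes/v1.0-trainval",
    "nuscenes_infos_train.pkl",
    "nuscenes_infos_val.pkl",
]

def missing_entries(data_root):
    """Return every expected path that does not exist under data_root."""
    return [p for p in EXPECTED
            if not os.path.exists(os.path.join(data_root, p))]
```

Running `missing_entries("data")` and failing fast on a non-empty result is cheaper than a crash deep inside the dataloader.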
- Train TPVFormer for the LiDAR segmentation task:

  ```bash
  bash launcher.sh config/tpvformer/tpvformer_lidarseg_dim128_r50_800.py out/tpvformer_lidarseg_dim128_r50_800
  ```

- Train TPVConv with PlainLSSLifter for the LiDAR segmentation task:

  ```bash
  bash launcher.sh config/tpvconv/tpvconv_lidarseg_dim384_r50_800_layer10.py out/tpvconv_lidarseg_dim384_r50_800_layer10
  ```

- Train TPVConv with DepthLSSLifter for the LiDAR segmentation task:

  ```bash
  bash launcher.sh config/tpvconv/tpvconv_lidarseg_dim384_r50_800_layer10_depthlss.py out/tpvconv_lidarseg_dim384_r50_800_layer10_depthlss
  ```
There are only two steps to launch experiments on the High-Flyer AI Platform.

- Create a soft link from hfai_nuscenes_path to data/nuscenes.
- Download nuScenes-lidarseg-all-v1.0.tar from nuscenes.org, and extract the files to data/lidarseg.
- Download maps.tar.gz from https://cloud.tsinghua.edu.cn/f/a74a0dd52bb9459699f2/?dl=1, and extract the files to data/maps.
- The final data/ directory should be organized as follows:

  ```
  OpenOcc/data
  ├── nuscenes          # soft link
  ├── lidarseg
  │   ├── lidarseg
  │   ├── v1.0-mini
  │   ├── v1.0-trainval
  │   └── v1.0-test
  ├── maps
  │   └── *.png
  ├── nuscenes_infos_train.pkl
  └── nuscenes_infos_val.pkl
  ```
Simply add `--hfai` to your shell command to launch experiments on the High-Flyer AI Platform.
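One common way to wire such a switch, sketched with argparse (hypothetical: the real launcher may parse its arguments differently):

```python
import argparse

# Hypothetical sketch of a launcher's argument handling; names are illustrative.
parser = argparse.ArgumentParser(description="toy launcher argument sketch")
parser.add_argument("config")    # e.g. config/tpvformer/...
parser.add_argument("work_dir")  # e.g. out/tpvformer_...
parser.add_argument("--hfai", action="store_true",
                    help="submit the run to the High-Flyer AI Platform")

args = parser.parse_args(["cfg.py", "out/run", "--hfai"])
```

With `action="store_true"` the flag defaults to off, so local runs need no change at all.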