EFG



An Efficient, Flexible, and General deep learning framework that retains a minimal core. Users can use EFG to explore any research topic by following the provided project templates.

What's New

0. Benchmarking

1. Installation

1.1 Prerequisites

  • gcc >= 5 (C++11 or newer)
  • python >= 3.6
  • cuda >= 10.1
  • pytorch >= 1.6
# spconv
spconv-cu11{X} (set X according to your CUDA version)

# waymo_open_dataset
## python 3.6
waymo-open-dataset-tf-2-1-0==1.2.0

## python 3.7, 3.8
waymo-open-dataset-tf-2-4-0==1.3.1
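
For example, on a machine with CUDA 11.3 and Python 3.8, the dependencies above can be installed as follows (a sketch; swap the spconv wheel suffix to match your local CUDA version):

# check the local CUDA version first, e.g. "release 11.3"
nvcc --version
# install the spconv wheel matching that version (here: CUDA 11.3)
pip install spconv-cu113
# Waymo toolkit for Python 3.7/3.8
pip install waymo-open-dataset-tf-2-4-0==1.3.1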

1.2 Build from source

git clone https://github.com/poodarchu/EFG.git
cd EFG
pip install -v -e .
# set logging path to save model checkpoints, training logs, etc.
echo "export EFG_CACHE_DIR=/path/to/your/logs/dir" >> ~/.bashrc

2. Data

2.1 Data Preparation - Waymo

# download waymo dataset v1.2.0 (or v1.3.2, etc)
gsutil -m cp -r \
  "gs://waymo_open_dataset_v_1_2_0_individual_files/testing" \
  "gs://waymo_open_dataset_v_1_2_0_individual_files/training" \
  "gs://waymo_open_dataset_v_1_2_0_individual_files/validation" \
  .

# extract frames from tfrecord to pkl
CUDA_VISIBLE_DEVICES=-1 python cli/data_preparation/waymo/waymo_converter.py --record_path "/path/to/waymo/training/*.tfrecord" --root_path "/path/to/waymo/train/"
CUDA_VISIBLE_DEVICES=-1 python cli/data_preparation/waymo/waymo_converter.py --record_path "/path/to/waymo/validation/*.tfrecord" --root_path "/path/to/waymo/val/"

# create softlink to datasets
cd /path/to/EFG/datasets; ln -s /path/to/waymo/dataset/root waymo; cd ..
# create data summary and gt database from extracted frames
python cli/data_preparation/waymo/create_data.py --root-path datasets/waymo --split train --nsweeps 1
python cli/data_preparation/waymo/create_data.py --root-path datasets/waymo --split val --nsweeps 1
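
The commands above build single-frame (1-sweep) summaries. For multi-frame models such as the 4-frame CenterPoint entry in the Model Zoo below, the summaries can be rebuilt with a larger --nsweeps value; a sketch, assuming --nsweeps maps directly to the number of frames:

# regenerate data summaries and gt database for 4-frame training (sketch)
python cli/data_preparation/waymo/create_data.py --root-path datasets/waymo --split train --nsweeps 4
python cli/data_preparation/waymo/create_data.py --root-path datasets/waymo --split val --nsweeps 4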

2.2 Data Preparation - nuScenes

# expected nuScenes directory layout
datasets/nuscenes
├── can_bus
├── lidarseg
├── maps
├── occupancy
│   ├── annotations.json
│   └── gts
├── panoptic
├── samples
├── sweeps
├── v1.0-mini
├── v1.0-test
└── v1.0-trainval
# create softlink to datasets
cd /path/to/EFG/datasets; ln -s /path/to/nuscenes/dataset/root nuscenes; cd ..
# assuming nuScenes/samples images are used; place gts and annotations.json under nuScenes/occupancy
python cli/data_preparation/nuscenes/create_data.py --root-path datasets/nuscenes --version v1.0-trainval --nsweeps 11 --occ --seg
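
Before running create_data, it can help to confirm the layout above is actually in place; a minimal check, assuming the directory tree shown earlier:

# verify the expected nuScenes layout (sketch)
for d in samples sweeps maps v1.0-trainval occupancy/gts occupancy/annotations.json; do
  [ -e "datasets/nuscenes/$d" ] || echo "missing: datasets/nuscenes/$d"
done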

3. Get Started

3.1 Training & Evaluation

# cd playground/path/to/experiment/directory

efg_run --num-gpus x                           # x defaults to 1
efg_run --num-gpus x task [train | val | test]
efg_run --num-gpus x --resume                  # resume training from the latest checkpoint
efg_run --num-gpus x dataloader.num_workers 0  # override any option in config.yaml from the command line

Models are evaluated automatically at the end of training. Alternatively, run evaluation directly:

efg_run --num-gpus x task val
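
Putting the pieces together, a typical end-to-end run looks like the following (the experiment directory below is illustrative, not an actual path in this repo):

cd playground/detection.3d/waymo/centerpoint  # hypothetical experiment directory
efg_run --num-gpus 8 task train               # train; evaluation runs automatically at the end
efg_run --num-gpus 8 task val                 # or re-run evaluation on the saved checkpoint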

4. Model Zoo

All models are trained and evaluated on 8 x NVIDIA A100 GPUs.

Waymo Open Dataset - 3D Object Detection (val - mAPH/L2)

Methods      Frames  Schedule  VEHICLE    PEDESTRIAN  CYCLIST
CenterPoint  1       36        66.9/66.4  68.2/62.9   69.0/67.9
CenterPoint  4       36        70.0/69.5  72.8/69.7   72.6/71.8
Voxel-DETR   1       6         67.6/67.1  69.5/63.0   69.0/67.8
ConQueR      1       6         68.7/68.2  70.9/64.7   71.4/70.1

nuScenes - 3D Object Detection (val)

Methods      Schedule  mAP   NDS   Logs
CenterPoint  20        59.0  66.7  -

5. Call for contributions

EFG is currently at a relatively early stage, and there is still a lot of work to do. If you are interested in contributing, you can email me at poodarchu@gmail.com.

6. Citation

@article{chen2023trajectoryformer,
  title={TrajectoryFormer: 3D Object Tracking Transformer with Predictive Trajectory Hypotheses},
  author={Chen, Xuesong and Shi, Shaoshuai and Zhang, Chao and Zhu, Benjin and Wang, Qiang and Cheung, Ka Chun and See, Simon and Li, Hongsheng},
  journal={arXiv preprint arXiv:2306.05888},
  year={2023}
}

@inproceedings{zhu2023conquer,
  title={ConQueR: Query Contrast Voxel-DETR for 3D Object Detection},
  author={Zhu, Benjin and Wang, Zhe and Shi, Shaoshuai and Xu, Hang and Hong, Lanqing and Li, Hongsheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={9296--9305},
  year={2023}
}

@misc{zhu2023efg,
  title={EFG: An Efficient, Flexible, and General deep learning framework that retains a minimal core},
  author={EFG Contributors},
  howpublished={\url{https://github.com/poodarchu/efg}},
  year={2023}
}