Data collection tools implementing an active data acquisition strategy for the CARLA Simulator.
Download CARLA version 0.9.14 here.
pip3 install -r requirements.txt
# Add the following environment variables to your .bashrc or .zshrc
export CARLA_ROOT=[PATH_TO_YOUR_CARLA]
export PYTHONPATH=$PYTHONPATH:$CARLA_ROOT/PythonAPI/carla/dist/[YOUR_CARLA_EGG_NAME]:$CARLA_ROOT/PythonAPI/carla/
Execute the command in the root directory:
python3 data_recorder.py
We support exporting image data in YOLO format and lidar data in OpenPCDet format.
Execute the command in the root directory:
python format_helper.py -s {raw_data/record...}
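For reference, each line of a YOLO label file stores `class x_center y_center width height`, with coordinates normalized to the image dimensions. A minimal sketch of that conversion from a pixel-space bounding box (the function name and box layout here are illustrative, not part of `format_helper.py`):

```python
def to_yolo_line(class_id, bbox, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max)
    to a normalized YOLO label line."""
    x_min, y_min, x_max, y_max = bbox
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: a 200x100 box at the top-left corner of an 800x600 image
print(to_yolo_line(0, (0, 0, 200, 100), 800, 600))
# -> 0 0.125000 0.083333 0.250000 0.166667
```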
Execute the command in the root directory to visualize the lidar point cloud:
python label_tools/kitti_lidar/lidar_label_view.py -d {local_semantic_lidar.npy}
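As a quick sanity check before running the viewer, the saved semantic lidar array can be inspected with NumPy. This is a sketch assuming one point per row; the exact column layout (e.g. x, y, z plus semantic tags) depends on the recorder's output:

```python
import numpy as np

# A small dummy cloud so the snippet is self-contained; replace this with
# np.load("local_semantic_lidar.npy") to inspect real recorded data.
points = np.random.rand(1000, 4).astype(np.float32)  # assumed layout: x, y, z, tag

# Basic sanity checks before visualizing
print("shape:", points.shape)                          # one row per lidar point
print("x range:", points[:, 0].min(), points[:, 0].max())
```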
Data can be collected by simply executing `python3 data_recorder.py`. The configuration can be changed by modifying the files in the `config` folder.
Dataset Name | Total Frames | Map |
---|---|---|
D1 | 900 | Town02 |
D1-S | 579 | Town02 |
D2 | 900 | Town02 |
V | 375 | Town03 |
Dataset Name | Vehicle Instances | Training Time | mAP |
---|---|---|---|
D1 | 996 | 14min40s | 0.508 |
D1-S | 996 | 11min32s | 0.492 |
D2 | 1835 | 16min07s | 0.647 |
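For context on the mAP column: average precision is the area under the precision-recall curve, computed per class and then averaged over classes. A simplified sketch of AP over confidence-ranked detections (non-interpolated; the example data is illustrative):

```python
def average_precision(is_tp, num_gt):
    """Compute AP from detections sorted by descending confidence.
    is_tp: booleans, True where a detection matched a ground-truth box.
    num_gt: total number of ground-truth boxes."""
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for hit in is_tp:
        tp += hit
        fp += not hit
        precision = tp / (tp + fp)
        recall = tp / num_gt
        ap += precision * (recall - prev_recall)  # rectangle under the PR curve
        prev_recall = recall
    return ap

# Three detections, two of them correct, against two ground-truth boxes
print(average_precision([True, False, True], num_gt=2))  # ~0.833
```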
Thank you for your interest in contributing to this project! Contributions are highly appreciated and help improve the project for everyone. If you have any questions or need further assistance, please feel free to open an issue.
@article{Lai2023ActiveDA,
title={Active Data Acquisition in Autonomous Driving Simulation},
author={Jianyu Lai and Zexuan Jia and Boao Li},
journal={ArXiv},
year={2023},
volume={abs/2306.13923}
}
To validate the correctness of the strategy, we plan to evaluate it with multiple detection algorithms:
- YOLO
- CenterPoint