[RAL 2024] Deep Reinforcement Learning-based Large-scale Robot Exploration - Public code and model
Note: This is a new implementation of the ARiADNE ground-truth-critic variant. You can find our original implementation on the main branch. We reimplemented the code to optimize computing time, RAM/VRAM usage, and compatibility with ROS. The trained model can be tested directly in our ARiADNE ROS planner.
We recommend using conda for package management. Our planner is written in Python and based on PyTorch. Besides PyTorch, please install the following packages:
pip install scikit-image matplotlib ray tensorboard
We tested our planner with various versions of these packages, so you can simply install the latest ones.
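After installing, a quick import check can confirm the dependencies are in place. The helper function below is our own, not part of the repo; note that scikit-image is imported as `skimage`:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Import names for the packages listed above, plus PyTorch.
required = ["skimage", "matplotlib", "ray", "tensorboard", "torch"]
print(missing_packages(required))  # an empty list means everything is installed
```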
Download this repo and go into the folder:
git clone https://github.com/marmotlab/ARiADNE.git
cd ARiADNE
Activate your conda environment, if any, and run:
python driver.py
The default training code requires around 8 GB of VRAM and 20 GB of RAM.
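Given the roughly 20 GB RAM requirement, a quick sanity check before launching a long training run can avoid out-of-memory surprises. This Linux-only sketch (our own helper, not part of the repo) parses `/proc/meminfo`:

```python
import os

def total_ram_gb(meminfo_text):
    """Parse the MemTotal line of a /proc/meminfo dump (value is in kB)."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            return int(line.split()[1]) / 1024 ** 2
    raise ValueError("MemTotal not found")

# /proc/meminfo only exists on Linux, so guard the read.
if os.path.exists("/proc/meminfo"):
    with open("/proc/meminfo") as f:
        print(f"Detected {total_ram_gb(f.read()):.1f} GB of system RAM")
```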
You can modify the hyperparameters in parameter.py.

Files in this repo:
- parameters.py: Training parameters.
- driver.py: Driver of the training program; maintains and updates the global network.
- runner.py: Wrapper of the workers.
- worker.py: Interacts with the environment and collects episode experience.
- model.py: Defines the attention-based network.
- env.py: Autonomous exploration environment.
- node_manager.py: Manages and updates the informative graph for policy observation.
- ground_truth_node_manager.py: Manages and updates the ground-truth informative graph for critic observation.
- quads: Quad tree for node indexing, provided by Daniel Lindsley.
- sensor.py: Simulates the sensor model of the Lidar.
- utils: Helper functions.
- /maps: Maps of training environments, provided by Chen et al.
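The core building block of the attention-based network in model.py is attention over graph nodes. As a rough illustration of that operation only (a minimal NumPy sketch of scaled dot-product attention, not the repo's actual implementation or architecture):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(d)) V, the core operation of an attention layer."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    # Numerically stable softmax over the last axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy example: 4 graph nodes with 8-dimensional features attending over each other.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of the value rows, so a node's new feature aggregates information from the nodes it attends to.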
Yuhong Cao
Rui Zhao
Yizhuo Wang
Bairan Xiang
Guillaume Sartoretti