This repository contains the accompanying code for the paper *Learning Goal-Oriented Non-Prehensile Pushing in Cluttered Scenes* by N. Dengler, D. Großklaus, and M. Bennewitz, submitted to IROS 2022. You can find the paper at http://arxiv.org/abs/2203.02389
Step 1: Clone the repository

```shell
cd
git clone https://github.com/NilsDengler/cluttered-pushing.git
```

Step 2: Create a virtual environment

```shell
cd cluttered-pushing
conda env create -f environment.yml -n <env_name>
conda activate <env_name>
```

Step 3: Install the package
Dependencies:
```shell
cd cluttered-pushing/push_gym/push_gym/utils/Lazy_Theta_with_optimization_any_angle_pathfinding
mkdir build && cd build
cmake ..
make
```

Package:

```shell
cd cluttered-pushing/push_gym
pip install -e .
```

Change directory to cluttered-pushing/Networks/RL/scripts:

```shell
cd cluttered-pushing/RL
```
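After the `pip install -e .` step above, the editable install can be sanity-checked. This is a hedged sketch: `push_gym` is assumed to be the importable package name, matching the directory the README installs from.

```python
import importlib.util

# Look up the package without importing it; find_spec returns None
# if the editable install did not register the package.
spec = importlib.util.find_spec("push_gym")
print("push_gym importable:", spec is not None)
```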
- To train an RL agent, customize the parameters given in `scripts/parametes.yaml`.
- Set `train: True` in `scripts/parametes.yaml`.
- Check or change the network's hyperparameters in `scripts/train_agent_script`.
- A VAE model is required for training; please refer to the VAE README to train a VAE model or download already trained models.
- To start training:

```shell
python run_agent.py
```
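Put together, a training run could be switched on with a fragment like the following (a hypothetical sketch: only the `train` key is documented in this README; the real `scripts/parametes.yaml` contains further keys):

```yaml
# Hypothetical fragment of scripts/parametes.yaml for training.
train: True
```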
- To evaluate a trained agent, set `train: False` in `scripts/parametes.yaml`.
- By default, testing uses `log_dir_name: "../Logs/example_agent/model_test/"` as specified in `scripts/parametes.yaml`. Please note that this is an example agent, not the agent used to reproduce the results of the paper.
- Testing:

```shell
python scripts/run_agent.py
```
- To run the baseline by Krivic and Piater, set `train: False` and `test_baseline: True` in `scripts/parametes.yaml`.
- Testing:

```shell
python scripts/run_agent.py
```
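The evaluation settings above could be combined in a fragment like this (a hypothetical sketch: only the three keys below are named in this README, and the real `scripts/parametes.yaml` contains further keys):

```yaml
# Hypothetical fragment of scripts/parametes.yaml for evaluation.
train: False
test_baseline: True   # assumption: set to False to evaluate the learned agent instead
log_dir_name: "../Logs/example_agent/model_test/"
```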
For more information, please refer to the README in cluttered-pushing/RL.