We used the Toyota HSR with Ubuntu 18.04 to execute this code, which led to the results presented in:
```bibtex
@article{langer2022where,
  title={Where Does It Belong? Autonomous Object Mapping in Open-World Settings},
  author={Langer, Edith and Patten, Timothy and Vincze, Markus},
  journal={Frontiers in Robotics and AI},
  pages={???},
  year={2022},
  publisher={Frontiers}
}
```
On the first start, run

```shell
xhost local:docker
docker-compose --profile first_run up
```
This will start all the necessary containers, described below.
The `.env` file is used to set the correct values for `ROS_IP`, `ROS_MASTER_URI`, and the shared folder path.
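A minimal `.env` might look like the following sketch. The IP addresses and path are placeholders for your own network and host, and the variable name for the shared folder is an assumption (the README only says a shared folder path is set here), so match it to what the repo's `docker-compose.yml` actually expects:

```shell
# .env - consumed by docker-compose (all values are placeholders; adjust them)
ROS_IP=192.168.1.42                        # IP of the machine running the containers
ROS_MASTER_URI=http://192.168.1.10:11311   # roscore running on the robot
SHARED_FOLDER=/home/user/share             # assumed variable name; mounted into the containers
```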
Voxblox and SparseConvNet are used to create the first full semantically segmented reconstruction of the room. After the Voxblox container has started, drive the robot through the room.
To save the reconstruction, execute the corresponding command from the table_extractor state machine on the robot, or call

```shell
rosservice call /voxblox_node/generate_mesh
```
If you want to start from the beginning, you can clear the progress with

```shell
rosservice call /voxblox_node/clear_map
```
After the reconstruction is saved, call SparseConvNet with

```shell
rosservice call /sparseconvnet_ros/sparseconvnet_ros_service/execute /root/share/hsrb_result.ply
```
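The two calls above can be wrapped in a small helper script. This is only a sketch: the mesh path is the one used elsewhere in this README, the script name is made up, and it assumes a running roscore so that `rosservice call --wait` can block until each service is advertised:

```shell
#!/usr/bin/env bash
# save_and_segment.sh (hypothetical name) - export the Voxblox mesh,
# then run SparseConvNet on the resulting .ply file.
set -euo pipefail

MESH=/root/share/hsrb_result.ply   # path used by the pipeline in this README

# --wait blocks until the service is advertised (requires a running roscore).
rosservice call --wait /voxblox_node/generate_mesh
rosservice call --wait /sparseconvnet_ros/sparseconvnet_ros_service/execute "$MESH"
```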
If you run

```shell
docker-compose up
```

only the following containers will be started:
- table_extractor
- png_to_klg
- elasticfusion
On the robot, start the sasha_run_table_extractor.sh script with

```shell
source /home/v4r/Markus_L/devel/setup.bash
rosrun table_extractor sasha_run_table_extractor.sh
```
This starts a tmux session: one pane starts the mongodb database, one loads the rosparams, one runs the move_around_tables.py script, and one runs the state machine, where you can issue commands.
- Press `m` and then Enter to have the state machine call the generate_mesh rosservice and afterwards the sparseconvnet_ros_service.
- Press `c` to call the clear_map service, which clears the progress from Voxblox.
- Press `s` to start the pipeline.
The pipeline will do the following steps:
- Clear database
- Fetch reconstruction file from the backpack to the robot
- Extract the planes from the reconstruction file and save them in the database
- Generate viewpoints for every extracted plane and save them in the database
- Choose a plane
- Move around a plane while keeping the camera pointed at it and recording a rosbag, then move the bagfile to the backpack
- Extract the png files from the rosbag
- Convert the png files to a klg file
- Call ElasticFusion
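The last three steps can also be reproduced manually after a recording. The sketch below makes several assumptions: the bag and log filenames are placeholders, the RGB topic name is a guess you must replace with the topic actually recorded, frame extraction uses the standard `image_view` extract_images node rather than the repo's own tooling, and the png_to_klg call parameters are not repeated here because they live in rosparam.yaml:

```shell
# Extract PNG frames from the recorded bag (topic and filenames are placeholders).
rosrun image_view extract_images _sec_per_frame:=0.03 image:=/camera/rgb/image_raw &
rosbag play recorded_tables.bag

# Convert the PNGs to a .klg file with png_to_klg (see rosparam.yaml for its
# call parameters), then feed the resulting log to ElasticFusion:
ElasticFusion -l recording.klg
```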
To change the rosparams, edit the rosparam.yaml file in the table_extractor folder. There you can, for example, change the call parameters for png_to_klg and ElasticFusion, or edit the topics that the move_around_tables script records.