Efficient Human 3D Localization and Free Space Segmentation for Human-Aware Robots in Warehouse Facilities
This is the code accompanying the submission described below.
Journal: Frontiers in Robotics and AI - Robot Vision and Artificial Perception
Research topic: Enhanced Human Modeling in Robotics for Socially-Aware Place Navigation
Abstract: Real-time prediction of human location combined with the capability to perceive obstacles is crucial for socially-aware navigation in robotics. Our work focuses on localizing humans in the world and predicting the free space around them by incorporating other static and dynamic obstacles. We propose a multi-task learning strategy to handle both tasks, achieving this goal with minimal computational demands. We use a dataset captured in a typical warehouse environment by mounting a perception module, consisting of a Jetson Xavier AGX and an Intel L515 LiDAR camera, on a MiR100 mobile robot. Our method, which builds upon prior works in the field of human detection and localization, demonstrates improved results in difficult cases that are not tackled in other works, such as human instances at a close distance or at the limits of the capturing sensor's field of view. We further extend this work by using a lightweight network structure and integrating a free space segmentation branch that can independently segment the floor space without any prior maps or 3D data, relying instead on the characteristics of the floor. In conclusion, our method presents a lightweight and efficient solution for predicting human 3D location and segmenting the floor space on low-energy-consumption platforms, tested in an industrial environment.
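To give a rough feel for the multi-task idea described above (shared features feeding both a human 3D-localization head and a free-space segmentation head), here is a minimal toy sketch in numpy. All layer sizes, names, and the grid layout are illustrative placeholders, not the architecture used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    return x @ w + b

# Hypothetical dimensions, chosen only for illustration.
FEAT = 32          # shared feature size per spatial cell
GRID = 8 * 8       # coarse 8x8 spatial grid over the image

# Shared "backbone": one feature vector per grid cell.
backbone_w = rng.standard_normal((16, FEAT)) * 0.1
backbone_b = np.zeros(FEAT)

# Head 1: human 3D localization -> (x, y, z) per cell.
loc_w = rng.standard_normal((FEAT, 3)) * 0.1
loc_b = np.zeros(3)

# Head 2: free-space segmentation -> floor probability per cell.
seg_w = rng.standard_normal((FEAT, 1)) * 0.1
seg_b = np.zeros(1)

def forward(cells):
    # cells: (GRID, 16) toy per-cell inputs standing in for image features
    feats = np.tanh(linear(cells, backbone_w, backbone_b))
    xyz = linear(feats, loc_w, loc_b)                            # (GRID, 3)
    floor_prob = 1 / (1 + np.exp(-linear(feats, seg_w, seg_b)))  # (GRID, 1)
    return xyz, floor_prob

cells = rng.standard_normal((GRID, 16))
xyz, floor_prob = forward(cells)
print(xyz.shape, floor_prob.shape)  # (64, 3) (64, 1)
```

The point of the shared backbone is that both branches reuse one forward pass over the image, which is what keeps the computational cost low enough for an embedded platform such as the Jetson Xavier AGX.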
For any questions or comments, please feel free to reach out:
- Dimitrios Arapis dimara@dtu.dk or dtai@novonordisk.com