Project Page | Paper | Video Demo
In this work, we present an end-to-end software-hardware framework that supports both conventional hardware and software components and integrates machine learning object detectors without requiring an additional dedicated graphics processing unit (GPU). We first design our framework to achieve real-time performance on the robot system, guarantee configuration optimization, and concentrate on code reusability. We then mathematically model and utilize our transfer learning strategies for 2D object detection and fuse the detections with depth images for 3D depth estimation. Lastly, we systematically test the proposed framework and method on the Baxter robot, which has two 7-DOF arms and a four-wheel mobility base. The results show that the robot achieves real-time performance while simultaneously executing other tasks (map building, localization, navigation, object detection, arm movement, and grasping) using available hardware such as onboard Intel GPUs on distributed computers. In addition, to comprehensively control, program, and monitor the robot system, we design and introduce an end-user application.
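The fusion of 2D detections with depth images described above can be sketched as a pinhole-camera back-projection: take the center pixel of a detection box, look up its depth in the aligned depth image, and recover the 3D point in the camera frame. This is a minimal illustration, not the paper's exact formulation; the function name, the box layout `(x_min, y_min, x_max, y_max)`, and the intrinsics values are assumptions for the example.

```python
import numpy as np

def bbox_center_to_3d(bbox, depth_map, fx, fy, cx, cy):
    """Back-project the center of a 2D detection box into a 3D point
    in the camera frame, using an aligned depth image and pinhole
    intrinsics (fx, fy: focal lengths; cx, cy: principal point)."""
    x_min, y_min, x_max, y_max = bbox
    u = (x_min + x_max) // 2      # pixel column of the box center
    v = (y_min + y_max) // 2      # pixel row of the box center
    z = float(depth_map[v, u])    # depth (meters) at that pixel
    x = (u - cx) * z / fx         # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example: a box centered on the principal point, 2 m away,
# back-projects to a point on the optical axis.
depth = np.full((480, 640), 2.0)                      # synthetic depth image
point = bbox_center_to_3d((300, 220, 340, 260), depth,
                          fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

In practice the depth image must be registered to the RGB frame before the lookup, and a robust implementation would average depth over the box interior rather than sample a single pixel.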
- Remote control of the cobot over a wireless connection (TCP/IP)
- Basic Control: Tuck/Untuck, Enable/Disable
- Joint Teaching
- World Position Monitor and Transformation
- Base control: Move Left/Right/Backward/Forward, Turn Left/Right
- Hand (gripper) control
- Download and execute Python scripts on the robot
- Run on distributed computers
- Perform machine learning tasks using the onboard Intel GPU
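The remote-control feature above relies on a plain TCP/IP link between the end-user application and the robot. A minimal sketch of that pattern is shown below; the newline-terminated command format, the `MOVE_FORWARD` command name, and the `ACK` reply are illustrative assumptions, not the project's actual wire protocol.

```python
import socket
import threading

def serve_once(port_holder, ready):
    # Minimal stand-in for the robot-side listener: accept one
    # connection, read one newline-terminated command, acknowledge it.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))               # ephemeral port for the demo
    port_holder.append(srv.getsockname()[1]) # report the chosen port
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    cmd = conn.recv(1024).decode().strip()
    conn.sendall(f"ACK {cmd}\n".encode())
    conn.close()
    srv.close()

def send_command(host, port, cmd):
    # Client side (end-user application): open a TCP connection,
    # send one command, and return the robot's reply.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((cmd + "\n").encode())
        return sock.recv(1024).decode().strip()

# Example round trip over loopback.
port_holder, ready = [], threading.Event()
t = threading.Thread(target=serve_once, args=(port_holder, ready))
t.start()
ready.wait()
reply = send_command("127.0.0.1", port_holder[0], "MOVE_FORWARD")
t.join()
```

A real deployment would keep the robot-side socket open in a loop and dispatch each command (base motion, arm motion, gripper) to the corresponding ROS interface.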
- ROS Indigo or Noetic
- Ubuntu 16.04 or 20.04
- Python 2.7, or 3.7 and above
- C/C++
- OpenVINO
@inproceedings{dang2023perfc,
  title={{PerFC}: An Efficient 2D and 3D Perception Software-Hardware Framework for Mobile Cobot},
  author={Dang, Tuan and Nguyen, Khang and Huber, Manfred},
  booktitle={The International FLAIRS Conference Proceedings},
  volume={36},
  year={2023}
}