This repository contains libraries and tools for a depth-sensor-based model for workspace monitoring and an interactive Augmented Reality (AR) User Interface (UI) for safe human-robot collaboration (HRC). The AR UI is implemented on two different hardware platforms: a projector-mirror setup and wearable AR gear (HoloLens). The repository contains the following modules:
- a projection-based user interface
- a head-mounted AR (HoloLens) user interface
- a depth-based safety system
Software dependencies:
- ROS Melodic
- OpenCV (2.4.x, using the one from the official Ubuntu repositories is recommended)
- PCL (1.8.x, using the one from the official Ubuntu repositories is recommended)
- iai_kinect2
- ROS-Industrial Universal Robot packages and the UR Modern Driver
To replicate the research work, the following hardware is required:
- UR5 from the Universal Robots family (tested on CB3 hardware with UR software 3.5.4)
- Standard 3LCD projector
- Flat worktable
- HoloLens
- Computer
- Install ROS Melodic (instructions for Ubuntu 18.04)
- Install iai_kinect2.
- Install the ROS-Industrial Universal Robot packages and the UR Modern Driver, following the instructions on their webpages.
- Clone this repository into your catkin workspace, install the dependencies, and build it:

```
cd ~/catkin_ws/src/
git clone https://github.com/Herrandy/HRC-TUNI.git
cd HRC-TUNI
rosdep install -r --from-paths .
cd ~/catkin_ws
catkin_make -DCMAKE_BUILD_TYPE="Release"
```
- Make a wired connection between the robot and the computer (instructions for Ubuntu).
- Test the connection: `ping <robot_ip>`. Check that data packets are received.
- Start the robot interface: `roslaunch ur_modern_driver ur5_bringup.launch robot_ip:=<robot_ip>`. Check that `/joint_states` is being updated.
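Beyond `ping`, it can be useful to verify that the robot's TCP interfaces are actually reachable before launching the driver (UR controllers expose, e.g., the secondary client interface on port 30002). The helper below is an illustrative sketch, not part of this repository:

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and attempts the TCP handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, unreachable host, and timeouts.
        return False

# Example (hypothetical robot IP):
# port_reachable("192.168.125.100", 30002)
```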
For debugging and development, download URSim and install it:
- Extract the files: `tar xvf <URSIM_TAR>`
- Run `./install.sh`
- Run inside the extracted URSim folder: `chmod u=rwx,g=rx,o=r`
- Install and switch to Java 8: `sudo apt install openjdk-8-jdk && sudo update-alternatives --config java`
- Start the simulator: `sudo ./start-ursim.sh`
Run the full system by launching the following components:

```
$ roslaunch ur_modern_driver ur5_bringup.launch robot_ip:=192.168.125.100
$ roslaunch kinect2_bridge kinect2_bridge.launch max_depth:=2.0 publish_tf:=true
$ rosrun projector projector_interface.py
$ rosrun projector handle_interaction_markers.py
$ rosrun robot dashboard_client.py
$ roslaunch safety_model detect.launch safety_map_scale:=100 cluster_tolerance:=0.005 min_cluster_size:=200 anomalies_threshold:=20 cloud_diff_threshold:=0.02 viz:=false
```
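The `dashboard_client.py` node talks to the UR controller's Dashboard Server, a line-oriented plain-text TCP service (port 29999) that accepts commands such as `play`, `pause`, and `stop`. The class below is a minimal illustrative client for that protocol, not the repository's implementation:

```python
import socket

class URDashboard:
    """Minimal client for the UR Dashboard Server (TCP port 29999).

    The server sends one greeting line on connect and one text
    reply line per command.
    """

    def __init__(self, host, port=29999, timeout=2.0):
        self.sock = socket.create_connection((host, port), timeout=timeout)
        self.reader = self.sock.makefile("r")
        # e.g. "Connected: Universal Robots Dashboard Server"
        self.greeting = self.reader.readline().strip()

    def send(self, command):
        """Send one command (e.g. 'play', 'pause', 'stop'); return the reply line."""
        self.sock.sendall((command + "\n").encode("ascii"))
        return self.reader.readline().strip()

    def close(self):
        self.reader.close()
        self.sock.close()
```

Usage: `URDashboard("<robot_ip>").send("play")` starts the loaded program on a real controller or URSim.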
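The `detect.launch` parameters suggest a background-subtraction scheme over the depth data: points deviating from a static background model by more than `cloud_diff_threshold` (meters) are treated as anomalies, and blobs smaller than `min_cluster_size` are discarded as noise. The sketch below illustrates that idea on a depth image with numpy; it is a simplified reading of the parameters, not the repository's PCL-based point-cloud pipeline:

```python
import numpy as np
from collections import deque

def detect_change(depth, background, diff_threshold=0.02, min_cluster_size=200):
    """Return sizes of connected pixel regions where `depth` deviates from
    the static `background` model by more than diff_threshold (meters).
    Regions smaller than min_cluster_size pixels are treated as noise."""
    mask = np.abs(depth - background) > diff_threshold
    visited = np.zeros_like(mask, dtype=bool)
    clusters = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                # BFS over 4-connected anomalous pixels.
                size = 0
                queue = deque([(sy, sx)])
                visited[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if size >= min_cluster_size:
                    clusters.append(size)
    return clusters
```

A non-empty return value would correspond to a detected workspace change (e.g. a human entering the monitored area).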
This is the reference implementation for the paper:
"AR-based interaction for human-robot collaborative manufacturing"
A. Hietanen, R. Pieters, M. Lanz, J. Latokartano, and J.-K. Kämäräinen
Robotics and Computer-Integrated Manufacturing, vol. 63, 2020
If you find this code useful in your work, please consider citing:
@article{hietanen2020ar,
title={{AR}-based interaction for human-robot collaborative manufacturing},
author={Hietanen, Antti and Pieters, Roel and Lanz, Minna and Latokartano, Jyrki and K{\"a}m{\"a}r{\"a}inen, Joni-Kristian},
journal={Robotics and Computer-Integrated Manufacturing},
volume={63},
pages={101891},
year={2020},
publisher={Elsevier}
}