Gaze Dialogue Model system for iCub Humanoid Robot
- clone the repository
git clone git@github.com:NunoDuarte/GazeDialogue.git
- start with the controller App (have a look at Structure to understand the pipeline of GazeDialogue)
cd controller
- install the dependencies for the controller App (see Dependencies)
- build
mkdir build && cd build
ccmake ..
make -j
- install the dependencies for the detection App (see Dependencies)
- install the dependencies for the connectivity App (see Dependencies; optional, only needed for the real iCub)
- Jump to Setup for the first tests of the GazeDialogue pipeline
For the controller App, follow the instructions on the iCub website:
- YARP (tested on v2.3.72)
- iCub (tested on v1.10)
$ git clone https://github.com/robotology/ycm.git -b v0.11.3
$ git clone https://github.com/robotology/yarp.git -b v3.4.0
$ git clone https://github.com/robotology/icub-main.git -b v1.17.0
- OpenCV (tested on v3.4.1 and v3.4.17)
- OpenCV can be built with or without CUDA, but we recommend building it with CUDA (tested on CUDA-8.0, CUDA-11.2, and CUDA-11.4). Please follow the official OpenCV documentation.
Install the requirements. We recommend using an Anaconda virtual environment:
pip3 install -r requirements.txt
The utils package comes from the TensorFlow Object Detection API (follow its instructions to install it). Then add it to your PYTHONPATH:
cd tensorflow/models/research
export PYTHONPATH=$PYTHONPATH:$(pwd)/slim
echo $PYTHONPATH
export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/object_detection
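If the paths are set correctly, the Object Detection API should be importable. A quick sanity check (a minimal sketch; the printed path depends on your install location):

# Sanity check that the TensorFlow Object Detection API is on PYTHONPATH
from object_detection.utils import label_map_util  # should import without error
print(label_map_util.__file__)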
pylsl needs liblsl. Either install it in /usr/ or set the PYLSL_LIB environment variable to the path of the library:
export PYLSL_LIB=/path/to/liblsl.so
This sends the PupilLabs stream to the detection App, which then forwards it to the iCub via YARP.
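A rough sketch of that flow (the real implementation is detection/pupil_lsl_yarp.py; the LSL stream type and port name here are illustrative assumptions):

# Sketch: read Pupil gaze samples from LSL and forward them over YARP
from pylsl import StreamInlet, resolve_stream
import yarp

yarp.Network.init()
port = yarp.BufferedPortBottle()
port.open("/pupil_gaze_tracker")  # example port name

streams = resolve_stream("type", "Gaze")  # assumed LSL stream type
inlet = StreamInlet(streams[0])

while True:
    sample, timestamp = inlet.pull_sample()
    bottle = port.prepare()
    bottle.clear()
    bottle.addFloat64(timestamp)  # use addDouble() on older YARP versions
    for value in sample:
        bottle.addFloat64(float(value))
    port.write()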
- LSL - LabStreamingLayer (tested on 1.12)
- YARP (tested on v2.3.72)
- PupilLabs - Pupil Capture (tested on v1.7.42)
- Pupil ROS plugin
Test detection App (pupil_data_test)
- go to detection app
cd detection
- run detection system offline
python3 main_offline.py
You should see a window with video output appear. The detection system runs on the exported PupilLabs data (pupil_data_test), and the output is [timestep, gaze fixation label, pixel_x, pixel_y] for each detected gaze fixation.
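Each output row can be consumed directly; a tiny illustrative example (values are made up):

# The offline output rows are [timestep, gaze fixation label, pixel_x, pixel_y]
fixations = [
    (0.033, "ball", 412, 216),  # illustrative values, not real data
    (0.066, "face", 105, 88),
]
for t, label, px, py in fixations:
    print(f"t={t:.3f}s fixated on '{label}' at pixel ({px}, {py})")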
Test the controller App (iCubSIM). There are three modes: manual robot leader, gazedialogue robot leader, and gazedialogue robot follower. The manual robot leader mode does not need an eye-tracker (PupilLabs), while the gazedialogue modes require one to work.
Open terminals:
yarpserver --write
yarpmanager
in yarpmanager do:
- open controller/apps/iCub_startup.xml
- open controller/apps/GazeDialogue_leader.xml
- run all modules in iCub_startup. You should see the iCubSIM simulator open a window, plus a second window. Open more terminals:
cd GazeDialogue/controller/build
./gazePupil-detector
- connect all modules in iCub_startup. You should see the iCub's perspective in the second window now.
./gazePupil-manual-leader
- connect all modules in GazeDialogue-Leader. Open terminal:
yarp rpc /service
- Write the following:
>> help
This shows the available actions:
>> look_down
>> grasp_it
>> pass or place
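The same actions can also be sent programmatically through the YARP Python bindings (a sketch; the local port name is arbitrary):

# Send an RPC command to /service, equivalent to typing it in yarp rpc
import yarp

yarp.Network.init()
client = yarp.RpcClient()
client.open("/gazePupil/rpc:o")  # arbitrary local port name
yarp.Network.connect("/gazePupil/rpc:o", "/service")

cmd, reply = yarp.Bottle(), yarp.Bottle()
cmd.addString("look_down")
client.write(cmd, reply)
print(reply.toString())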
Open terminals:
yarpserver --write
yarpmanager
in yarpmanager do:
- open controller/apps/iCub_startup.xml
- open controller/apps/GazeDialogue_leader.xml
- run all modules in iCub_startup. You should see the iCubSIM simulator open a window, plus a second window. Open more terminals:
cd GazeDialogue/controller/build
./gazePupil-detector
- connect all modules in iCub_startup. You should see the iCub's perspective in the second window now.
./gazePupil-main-leader
- connect all modules in GazeDialogue-Leader.
- Press Enter - robot will look down
- Press Enter - robot will find ball and grasp it (try to!)
- Press Enter - robot will run GazeDialogue system for leader (needs PupilLabs to function properly)
Open terminals:
yarpserver --write
yarpmanager
in yarpmanager do:
- open controller/apps/iCub_startup.xml
- open controller/apps/GazeDialogue_follower.xml
- run all modules in iCub_startup. You should see the iCubSIM simulator open a window, plus a second window. Open more terminals:
cd GazeDialogue/controller/build
./gazePupil-detector
- connect all modules in iCub_startup. You should see the iCub's perspective in the second window now.
./gazePupil-main-follower
- connect all modules in GazeDialogue-Follower.
- Press Enter - robot will run GazeDialogue system for follower (needs PupilLabs to function properly)
For the real iCub, you need to change the robot name in src/extras/configure.cpp from "icubSim" to "icub":
// Open cartesian solver for right and left arm
string robot = "icub";  // was "icubSim"
Then recompile (run make in build again).
- open YARP - yarpserver
- use yarp namespace /icub (for more information check the link)
- open Pupil-Labs (Capture App)
- open detection project
- run Pupil_Stream_to_Yarp to open LSL
- check /pupil_gaze_tracker is publishing gaze fixations
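One way to verify the port is publishing (a sketch using the YARP Python bindings; yarp read ... /pupil_gaze_tracker from a terminal works too):

# Read one bottle from /pupil_gaze_tracker to confirm fixations arrive
import yarp

yarp.Network.init()
port = yarp.BufferedPortBottle()
port.open("/check_gaze:i")  # arbitrary local name
yarp.Network.connect("/pupil_gaze_tracker", "/check_gaze:i")

bottle = port.read()  # blocks until a sample arrives
if bottle is not None:
    print("received:", bottle.toString())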
Run on the real robot - without the right arm (optional). First, start iCubStartup from the yarpmotorgui on the real iCub and run the following modules:
- yarprobotinterface --from yarprobotinterface_noSkinNoRight.ini
- iKinCartesianSolver -part left_arm
- iKinGazeCtrl
- wholeBodyDynamics icubbrain1 --headV2 --autocorrect --no_right_arm
- gravityCompensator icubbrain2 --headV2 --no_right_arm
- fingersTuner icub-laptop
- imuFilter pc104
.
├── Controller
│   ├── CMakeLists.txt
│   ├── app
│   │   ├── GazeDialogue_follower.xml
│   │   ├── GazeDialogue_leader.xml
│   │   └── iCub_startup.xml
│   ├── include
│   │   ├── compute.h
│   │   ├── configure.h
│   │   ├── helpers.h
│   │   └── init.h
│   └── src
│       ├── icub_follower.cpp
│       ├── icub_leader.cpp
│       └── extras
│           ├── CvHMM.h
│           ├── CvMC.h
│           ├── compute.cpp
│           ├── configure.cpp
│           ├── detector.cpp
│           └── helpers.cpp
└── Detection
    ├── main.py | main_offline.py
    ├── face_detector.py | face_detector_gpu.py
    ├── objt_tracking.py
    ├── gaze_behaviour.py
    └── pupil_lsl_yarp.py
If you have the detection App and/or the connectivity App on a different computer, do not forget to point YARP to where the iCub is running:
- yarp namespace /icub (in case /icub is the name of the yarp network)
- yarp detect (to check you are connected)
- gedit /home/user/.config/yarp/_icub.conf
- add the line: <ip of the computer you wish to connect to> 10000 yarp
Read camera output
- yarpdev --device grabber --name /test/video --subdevice usbCamera --d /dev/video0
- yarp connect /test/video /icubSim/texture/screen
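To inspect frames from Python instead (a sketch following the standard YARP-to-numpy pattern; port names match the commands above):

# Grab one frame from /test/video into a numpy array
import numpy as np
import yarp

yarp.Network.init()
port = yarp.BufferedPortImageRgb()
port.open("/viewer:i")  # arbitrary local name
yarp.Network.connect("/test/video", "/viewer:i")

img = port.read()  # blocks until a frame arrives
buf = np.zeros((img.height(), img.width(), 3), dtype=np.uint8)
wrap = yarp.ImageRgb()
wrap.resize(img.width(), img.height())
wrap.setExternal(buf.data, img.width(), img.height())
wrap.copy(img)  # copies the received frame into buf
print("frame shape:", buf.shape)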
- To make it work on Ubuntu 16.04 with CUDA-11.2 and TensorFlow 2.7, do the following:
- install nvidia driver 460.32.03 (cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb)
- wget https://developer.download.nvidia.com/compute/cuda/11.2.1/local_installers/cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb
- sudo dpkg -i cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb
- sudo apt-key add /var/cuda-repo-ubuntu1604-11-2-local/7fa2af80.pub
- sudo apt-get install cuda-11-2
- check that apt-get is not removing any packages
- install cuDNN 8.1 for CUDA-11.0, 11.1, and 11.2
- test using deviceQuery from the cuda-11.0 samples (1_Utilities)
- follow the guidelines of Building and Instructions
- if, after installing TensorFlow, the system complains about a missing cudart.so.11.0, do the following (you can add these lines to ~/.bashrc):
export PATH=$PATH:/usr/local/cuda-11.2/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2/lib64
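You can then confirm that TensorFlow sees the GPU:

# Verify the CUDA setup is picked up by TensorFlow
import tensorflow as tf
print(tf.__version__)  # expect 2.7.x
print(tf.config.list_physical_devices("GPU"))  # should list at least one GPU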
- To make it work with TensorFlow 2.7, I needed to alter the code in ~/software/tensorflow/models/research/object_detection/utils/label_map_util.py (line 132) to use
with tf.io.gfile.GFile(path, 'r') as fid:
instead of
with tf.gfile.GFile(path, 'r') as fid:
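An alternative that avoids editing the API source (an untested sketch) is to monkey-patch the removed alias before importing the object_detection modules:

# Compatibility shim: map the removed tf.gfile alias onto tf.io.gfile
import tensorflow as tf
tf.gfile = tf.io.gfile

from object_detection.utils import label_map_util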
If you find this code useful in your research, please consider citing our paper:
M. Raković, N. F. Duarte, J. Marques, A. Billard and J. Santos-Victor, "The Gaze Dialogue Model: Nonverbal Communication in HHI and HRI," in IEEE Transactions on Cybernetics, doi: 10.1109/TCYB.2022.3222077.
Nuno Ferreira Duarte
MIT © Nuno Duarte