
Gaze Dialogue Model

Tested with Python 3.5.5 / 3.8 and TensorFlow 1.9 / 2.8.0; the controller is written in C++.

Gaze Dialogue Model system for iCub Humanoid Robot

Table of Contents

  • Building
  • Dependencies
  • Setup
  • Run on the real robot (iCub)
  • Structure
  • Instructions for a dual-computer system
  • Extras
  • Issues
  • Citation
  • Contributing
  • License
Building

  1. Clone the repository:
git clone git@github.com:NunoDuarte/GazeDialogue.git
  2. Start with the controller App (have a look at Structure to understand the GazeDialogue pipeline):
cd controller
  3. Install the dependencies for the controller App (see Dependencies).
  4. Build:
mkdir build && cd build
ccmake ..
make -j
  5. Install the dependencies for the detection App (see Dependencies).
  6. Install the dependencies for the connectivity App (see Dependencies); optional, only needed for the real iCub.
  7. Jump to Setup for the first tests of the GazeDialogue pipeline.

Dependencies

For the controller App, follow the instructions on the iCub website:

  • YARP (tested on v2.3.72)
  • iCub (tested on v1.10)
$ git clone https://github.com/robotology/ycm.git -b v0.11.3
$ git clone https://github.com/robotology/yarp.git -b v3.4.0
$ git clone https://github.com/robotology/icub-main.git -b v1.17.0
  • OpenCV (tested on v3.4.1 and v3.4.17)
    • OpenCV can be built with or without CUDA, but we recommend building OpenCV with CUDA (tested on CUDA-8.0, CUDA-11.2, and CUDA-11.4). Please follow the official OpenCV documentation; a quick way to verify the build is sketched below.
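
A minimal check that Python picks up the intended OpenCV build, and whether it was compiled with CUDA (this check script is ours, not part of the repository):

# Check the OpenCV version and CUDA support of the installed build.
import cv2

print(cv2.__version__)
# Returns 0 on a CUDA-less build; > 0 when OpenCV was compiled with CUDA
# and a CUDA-capable GPU is visible.
print(cv2.cuda.getCudaEnabledDeviceCount())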

For the detection App

Install the requirements; we recommend doing so inside an Anaconda virtual environment:

pip3 install -r requirements.txt

The utils package comes from the TensorFlow Object Detection API (follow its instructions to install it). Then add it to your PYTHONPATH; a quick import check follows the commands below:

cd tensorflow/models/research
export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/slim:$(pwd)/object_detection
echo $PYTHONPATH
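
A quick sanity check that the API is importable (a minimal sketch, assuming the exports above are in effect):

# Fails with ImportError if the Object Detection API is not on PYTHONPATH.
from object_detection.utils import label_map_util

print(label_map_util.__file__)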

pylsl needs liblsl. Either install it under /usr/ or point the PYLSL_LIB environment variable to the library file:

export PYLSL_LIB=/path/to/liblsl.so

For the connectivity App:

This app forwards the PupilLabs eye-tracker stream to the detection App, which then sends it to the iCub over YARP. A rough sketch of that bridge is shown below.
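
For illustration, a minimal sketch of such an LSL-to-YARP bridge; the stream type 'Gaze' and the port name are assumptions, and the repository's actual implementation is detection/pupil_lsl_yarp.py:

# Hypothetical LSL-to-YARP bridge: pull PupilLabs gaze samples from LSL
# and republish them on a YARP port. Stream type and port name are assumptions.
from pylsl import StreamInlet, resolve_stream
import yarp

yarp.Network.init()
out = yarp.BufferedPortBottle()
out.open('/pupil_gaze_tracker')

streams = resolve_stream('type', 'Gaze')   # PupilLabs LSL relay stream
inlet = StreamInlet(streams[0])

while True:
    sample, timestamp = inlet.pull_sample()
    bottle = out.prepare()
    bottle.clear()
    bottle.addFloat64(timestamp)
    for value in sample:
        bottle.addFloat64(value)
    out.write()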

Setup

Test the detection App (pupil_data_test):

  1. Go to the detection app:
cd detection
  2. Run the detection system offline:
python3 main_offline.py

You should see a window with the video output appear. The detection system runs on the PupilLabs exported data (pupil_data_test) and the output is [timestep, gaze fixation label, pixel_x, pixel_y] for each detected gaze fixation.
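
The fixations can then be aggregated per label, for instance (the sample values below are made up for illustration):

# Hypothetical post-processing of the detection output:
# count fixations per label. The sample data is illustrative only.
from collections import Counter

fixations = [
    (0.03, 'hand', 312, 240),   # [timestep, label, pixel_x, pixel_y]
    (0.07, 'ball', 320, 251),
    (0.10, 'ball', 322, 249),
]

print(Counter(label for _, label, _, _ in fixations))
# Counter({'ball': 2, 'hand': 1})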

Test the controller App (iCubSIM). There are three modes: manual robot leader, gazedialogue robot leader, and gazedialogue robot follower. The manual robot leader mode does not need the eye-tracker (PupilLabs), while the gazedialogue modes require the eye-tracker (PupilLabs) to work.

Manual mode:

Open terminals:

yarpserver --write
yarpmanager

in yarpmanager do:

  1. open controller/apps/iCub_startup.xml
  2. open controller/apps/GazeDialogue_leader.xml
  3. run all modules in iCub_startup; you should see the iCubSIM simulator open a window, plus a second window. Open more terminals and run:
cd GazeDialogue/controller/build
./gazePupil-detector
  4. connect all modules in iCub_startup; you should now see the iCub's perspective in the second window. Then run:
./gazePupil-manual-leader
  5. connect all modules in GazeDialogue-Leader, then open a terminal:
yarp rpc /service
  6. Type >> help to see the available actions, then try:
>> look_down
>> grasp_it
>> pass or place
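
These actions can also be sent programmatically. A minimal sketch with the YARP Python bindings (assumed to be installed; the client port name /gazePupil-rpc is hypothetical):

# Send an action to the controller's RPC service, as `yarp rpc /service` does.
import yarp

yarp.Network.init()
client = yarp.RpcClient()
client.open('/gazePupil-rpc')              # hypothetical local port name
yarp.Network.connect('/gazePupil-rpc', '/service')

cmd, reply = yarp.Bottle(), yarp.Bottle()
cmd.addString('look_down')                 # one of the actions listed above
client.write(cmd, reply)
print(reply.toString())

client.close()
yarp.Network.fini()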

GazeDialogue mode - Robot as a Leader:

Open terminals:

yarpserver --write
yarpmanager

in yarpmanager do:

  1. open controller/apps/iCub_startup.xml
  2. open controller/apps/GazeDialogue_leader.xml
  3. run all modules in iCub_startup; you should see the iCubSIM simulator open a window, plus a second window. Open more terminals and run:
cd GazeDialogue/controller/build
./gazePupil-detector
  4. connect all modules in iCub_startup; you should now see the iCub's perspective in the second window. Then run:
./gazePupil-main-leader
  5. connect all modules in GazeDialogue-Leader.
  6. Press Enter - the robot will look down.
  7. Press Enter - the robot will find the ball and grasp it (try to!).
  8. Press Enter - the robot will run the GazeDialogue system for the leader (needs PupilLabs to function properly).

GazeDialogue mode - Robot as a Follower:

Open terminals:

yarpserver --write
yarpmanager

in yarpmanager do:

  1. open controller/apps/iCub_startup.xml
  2. open controller/apps/GazeDialogue_follower.xml
  3. run all modules in iCub_startup; you should see the iCubSIM simulator open a window, plus a second window. Open more terminals and run:
cd GazeDialogue/controller/build
./gazePupil-detector
  4. connect all modules in iCub_startup; you should now see the iCub's perspective in the second window. Then run:
./gazePupil-main-follower
  5. connect all modules in GazeDialogue-Follower.
  6. Press Enter - the robot will run the GazeDialogue system for the follower (needs PupilLabs to function properly).

Run on the real robot (iCub)

You need to change the robot name in src/extras/configure.cpp:

        // Open cartesian solver for right and left arm
        string robot="icub";

i.e. change it from "icubSim" to "icub", then recompile.

Robot as a Follower:

  1. open YARP - yarpserver
  2. use yarpnamespace /icub (for more information check the link)
  3. open Pupil-Labs (Capture App)
  4. open the detection project
  5. run Pupil_Stream_to_Yarp to open the LSL stream
  6. check that /pupil_gaze_tracker is publishing gaze fixations

Run on the real robot without the right arm (optional). First, start iCubStartup from the yarpmotorgui on the real iCub and run the following modules:

  • yarprobotinterface --from yarprobotinterface_noSkinNoRight.ini
  • iKinCartesianSolver -part left_arm
  • iKinGazeCtrl
  • wholeBodyDynamics icubbrain1 --headV2 --autocorrect --no_right_arm
  • gravityCompensator icubbrain2 --headV2 --no_right_arm
  • fingersTuner icub-laptop
  • imuFilter pc104

Structure

.
├── controller
│   ├── CMakeLists.txt
│   ├── app
│   │   ├── GazeDialogue_follower.xml
│   │   ├── GazeDialogue_leader.xml
│   │   └── iCub_startup.xml
│   ├── include
│   │   ├── compute.h
│   │   ├── configure.h
│   │   ├── helpers.h
│   │   └── init.h
│   └── src
│       ├── icub_follower.cpp
│       ├── icub_leader.cpp
│       └── extras
│           ├── CvHMM.h
│           ├── CvMC.h
│           ├── compute.cpp
│           ├── configure.cpp
│           ├── detector.cpp
│           └── helpers.cpp
└── detection
    ├── main.py | main_offline.py
    ├── face_detector.py | face_detector_gpu.py
    ├── objt_tracking.py
    ├── gaze_behaviour.py
    └── pupil_lsl_yarp.py

Instructions for a dual-computer system

In case the detection App and/or the connectivity App run on a different computer, do not forget to point YARP to where the iCub is running:

  • yarp namespace /icub (in case /icub is the name of the yarp network)
  • yarp detect (to check you are connected)
  • gedit /home/user/.config/yarp/_icub.conf
  • add the line: <ip of the computer you wish to connect to> 10000 yarp
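
From the second computer you can also verify the connection programmatically (a minimal sketch, assuming the YARP Python bindings are installed):

# Returns True when the name server of the /icub network is reachable.
import yarp

yarp.Network.init()
print(yarp.Network.checkNetwork())
yarp.Network.fini()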

Extras

Read camera output

  • yarpdev --device grabber --name /test/video --subdevice usbCamera --d /dev/video0
  • yarp connect /test/video /icubSim/texture/screen
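
To inspect the stream from Python instead, frames can be copied into a numpy array (a minimal sketch, assuming the YARP Python bindings; the port name /reader is ours):

# Read one frame from /test/video into a numpy array via a YARP port.
import numpy as np
import yarp

yarp.Network.init()
port = yarp.BufferedPortImageRgb()
port.open('/reader')
yarp.Network.connect('/test/video', '/reader')

frame = port.read()                        # blocking read of one image
buf = np.zeros((frame.height(), frame.width(), 3), dtype=np.uint8)
img = yarp.ImageRgb()
img.resize(frame.width(), frame.height())
img.setExternal(buf.data, buf.shape[1], buf.shape[0])
img.copy(frame)                            # pixels now live in `buf`
print(buf.shape, buf.mean())

port.close()
yarp.Network.fini()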

Issues

  • To make it work on Ubuntu 16.04 with CUDA-11.2 and TensorFlow 2.7, you need to do the following:
  1. install nvidia driver 460.32.03 (cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb)
  2. wget https://developer.download.nvidia.com/compute/cuda/11.2.1/local_installers/cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb
  3. sudo dpkg -i cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb
  4. sudo apt-key add /var/cuda-repo-ubuntu1604-11-2-local/7fa2af80.pub
  5. sudo apt-get install cuda-11-2
  6. check that apt-get is not removing any packages
  7. install Cudnn 8.1 for CUDA-11.0, 11.1, and 11.2
  8. test using deviceQuery on cuda-11.0 samples/1_Utilities
  9. follow the guidelines of Building and Instructions
  10. if, after installing TensorFlow, the system complains about a missing cudart.so.11.0, add the following to your environment (you can put it in ~/.bashrc):
export PATH=$PATH:/usr/local/cuda-11.2/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2/lib64
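
After these exports, a quick way to confirm TensorFlow sees the GPU:

# Lists the GPUs TensorFlow can use; empty output means CUDA was not found.
import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))
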
  • To make it work on TensorFlow 2.7, I needed to alter the code in ~/software/tensorflow/models/research/object_detection/utils/label_map_util.py (line 132) to:
with tf.io.gfile.GFile(path, 'r') as fid:

instead of

with tf.gfile.GFile(path, 'r') as fid:

Citation

If you find this code useful in your research, please consider citing our paper:

M. Raković, N. F. Duarte, J. Marques, A. Billard and J. Santos-Victor, "The Gaze Dialogue Model: Nonverbal Communication in HHI and HRI," in IEEE Transactions on Cybernetics, doi: 10.1109/TCYB.2022.3222077.

Contributing

Nuno Ferreira Duarte


License

MIT © Nuno Duarte