Gaze Dialogue Model system for iCub Humanoid Robot
Follow the installation instructions on the iCub website for:
- YCM
- YARP
- icub-main
- OpenCV (optional)
Clone and build the tested versions — either the older set:

```
git clone https://github.com/robotology/ycm.git -b v0.11.3
git clone https://github.com/robotology/yarp.git -b v2.3.72
git clone https://github.com/robotology/icub-main.git -b v1.10.0
git clone https://github.com/robotology/icub-contrib-common -b 7d9b7e4
```

or the newer set:

```
git clone https://github.com/robotology/ycm.git -b v0.11.3
git clone https://github.com/robotology/yarp.git -b v3.4.0
git clone https://github.com/robotology/icub-main.git -b v1.17.0
```

OpenCV is recommended with CUDA (tested on CUDA-8.0, CUDA-11.2, and CUDA-11.4). Please follow the official OpenCV documentation.
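After building OpenCV, a quick way to confirm that the Python bindings actually see the GPU is the check below (a minimal sketch; it only assumes a CUDA-enabled OpenCV build):

```python
# Sanity check for a CUDA-enabled OpenCV build.
import cv2

print(cv2.__version__)
# 0 means OpenCV was built without CUDA (or no GPU is visible).
print(cv2.cuda.getCudaEnabledDeviceCount())
```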
Install the requirements. We recommend installing them in a virtual environment such as Anaconda:

```
pip3 install -r requirements.txt
```

For our gaze fixations we use TensorFlow models:
```
git clone https://github.com/tensorflow/models.git
```

The `utils` package is from the TensorFlow Object Detection API (follow its instructions to install it). Then add it to your path:
```
cd models/research
export PYTHONPATH=$PYTHONPATH:$(pwd)/slim
export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/object_detection
echo $PYTHONPATH
```

pylsl needs liblsl (v1.13.0). Either install it in `/usr/` or point the environment variable `PYLSL_LIB` at the library:

```
cd liblsl && mkdir build && cd build && cmake .. && make
export PYLSL_LIB=/path/to/liblsl.so
```

You can test if the detection system is working by running `python main_offline.py`.
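If `main_offline.py` fails because pylsl cannot find liblsl, a quick check is to query the library version from Python (a minimal sketch):

```python
# If this import fails, pylsl could not locate liblsl;
# point PYLSL_LIB at your liblsl.so and retry.
from pylsl import library_version

print(library_version())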
This section covers sending the PupilLabs data to the detection App, which then forwards it to the iCub (through YARP).

Either install the PupilLabs Capture app or build it from source. We use LabStreamingLayer (LSL) to stream the data and convert it to YARP; a minimal sketch of this bridge follows the list below. An alternative to LabStreamingLayer is ROS (not yet tested):
- ROS
- PupilLabs ROS plugin
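For reference, the LSL-to-YARP bridge amounts to pulling gaze samples from the Pupil Capture LSL outlet and republishing them on a YARP port. The repository's pupil_lsl_yarp.py does the real work; the sketch below is only illustrative, and the stream type (`Gaze`) and port name (`/pupil_gaze_tracker`) are assumptions taken from this README and the PupilLabs LSL plugin:

```python
# Illustrative LSL-to-YARP bridge (see pupil_lsl_yarp.py for the real one).
import yarp
from pylsl import StreamInlet, resolve_stream

yarp.Network.init()
port = yarp.BufferedPortBottle()
port.open("/pupil_gaze_tracker")          # port name assumed from this README

streams = resolve_stream("type", "Gaze")  # stream type assumed from the LSL plugin
inlet = StreamInlet(streams[0])

while True:
    sample, timestamp = inlet.pull_sample()
    bottle = port.prepare()
    bottle.clear()
    bottle.addFloat64(timestamp)          # forward the LSL timestamp
    for value in sample:                  # followed by the gaze sample values
        bottle.addFloat64(value)
    port.write()
```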
- Clone the repository:

```
git clone git@github.com:NunoDuarte/GazeDialogue.git
```

- Start with the controller App:

```
cd controller
```

- Install the controller App dependencies
- Build:

```
mkdir build && cd build
ccmake ..
make -j
```

- Install the detection App dependencies
- Install the connectivity App dependencies (optional when using iCub)
- Jump to Setup for the first tests of the GazeDialogue pipeline
Test detection App (pupil_data_test)
- Go to the detection App:

```
cd detection
```

- Run the detection system offline:

```
python3 main_offline.py
```

You should see a video output window appear. The detection system runs on the exported PupilLabs data (pupil_data_test) and outputs `[timestep, gaze fixation label, pixel_x, pixel_y]` for each detected gaze fixation.
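For clarity, each record can be thought of as the following structure (a sketch; the field names and example values are invented for illustration):

```python
# Hypothetical container for one detection record; values are invented.
from collections import namedtuple

GazeFixation = namedtuple("GazeFixation", ["timestep", "label", "pixel_x", "pixel_y"])

fixation = GazeFixation(timestep=3.25, label="object", pixel_x=512, pixel_y=384)
print(fixation)
```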
Test the controller App (iCubSIM). There are three modes: manual robot leader, gazedialogue robot leader, and gazedialogue robot follower. Manual robot leader does not need the eye-tracker (PupilLabs), while the gazedialogue modes require the eye-tracker to work.
Open terminals:
```
yarpserver --write
yarpmanager
```

In yarpmanager do:

- Open `controller/apps/iCub_startup.xml`
- Open `controller/apps/GazeDialogue_leader.xml`
- Run all modules in iCub_startup

You should see the iCubSIM simulator open a window, plus a second window. Open more terminals:

```
cd GazeDialogue/controller/build
./gazePupil-detector
```

- Connect all modules in iCub_startup. You should now see the iCub's perspective in the second window.
- Run the manual leader:

```
./gazePupil-manual-leader
```

- Connect all modules in GazeDialogue-Leader. Open a terminal:

```
yarp rpc /service
```

- Write the following:

```
>> help
```

This shows the available actions:

```
>> look_down
>> grasp_it
>> pass or place
```
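The same actions can also be sent programmatically over YARP RPC, which is handy for scripted tests. A minimal sketch (the `/service` port name and action strings come from this README; the client port name is arbitrary):

```python
# Send an action to the controller's RPC service.
import yarp

yarp.Network.init()
rpc = yarp.RpcClient()
rpc.open("/gazedialogue/rpc:o")              # arbitrary client port name
yarp.Network.connect("/gazedialogue/rpc:o", "/service")

cmd, reply = yarp.Bottle(), yarp.Bottle()
cmd.addString("look_down")                   # or "grasp_it", "pass", "place"
rpc.write(cmd, reply)
print(reply.toString())
```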
Open terminals:
```
yarpserver --write
yarpmanager
```

In yarpmanager do:

- Open `controller/apps/iCub_startup.xml`
- Open `controller/apps/GazeDialogue_leader.xml`
- Run all modules in iCub_startup

You should see the iCubSIM simulator open a window, plus a second window. Open more terminals:

```
cd GazeDialogue/controller/build
./gazePupil-detector
```

- Connect all modules in iCub_startup. You should now see the iCub's perspective in the second window.
- Turn PupilLabs Capture on
- Make sure the streaming plugin is on
- Open a new terminal and start the detection App:

```
python3 main.py
```

You should see a window with the eye-tracker output. It should highlight the objects, faces, and gaze.

- Run Pupil_Stream_to_Yarp (pupil_lsl_yarp.py) to convert the message to YARP (this should be improved)
Now, depending on whether you want to interact with the iCub or iCubSIM as a leader or a follower, the instructions change slightly.
Open a new terminal to run the main process for the leader:

```
./gazePupil-main-leader
```

Connect the GazeDialogue-Leader yarp port that receives the gaze fixations. Press Enter, and the robot will run the GazeDialogue system for the leader.
Open a new terminal to run the main process for the follower:

```
./gazePupil-main-follower
```

Connect the GazeDialogue-Follower yarp port that receives the gaze fixations. Press Enter, and the robot will run the GazeDialogue system for the follower.
You need to change the robot name in the file src/extras/configure.cpp:

```
// Open cartesian solver for right and left arm
string robot = "icub";   // changed from "icubSim" to "icub"
```

Then recompile the build.
- Open YARP:

```
yarpserver
```

- Use `yarp namespace /icub` (for more information check the link)
- Open Pupil-Labs (Capture App)
- Open the detection project
- Run Pupil_Stream_to_Yarp to open LSL
- Check that `/pupil_gaze_tracker` is publishing gaze fixations (see the sketch below)
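One way to check that the port is publishing is to attach a reader port and print what arrives (a minimal sketch; `/gaze_check` is an arbitrary local port name):

```python
# Print one message from /pupil_gaze_tracker to confirm it is publishing.
import yarp

yarp.Network.init()
reader = yarp.BufferedPortBottle()
reader.open("/gaze_check")
yarp.Network.connect("/pupil_gaze_tracker", "/gaze_check")

bottle = reader.read()      # blocks until a message arrives
print(bottle.toString())
```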
Run on the real robot - without right arm (optional). First, start iCubStartup from the yarpmotorgui on the real iCub and run the following packages:

```
yarprobotinterface --from yarprobotinterface_noSkinNoRight.ini
iKinCartesianSolver -part left_arm
iKinGazeCtrl
wholeBodyDynamics icubbrain1 --headV2 --autocorrect --no_right_arm
gravityCompensator icubbrain2 --headV2 --no_right_arm
fingersTuner icub-laptop
imuFilter pc104
```
```
.
├── Controller
│   ├── CMakeLists.txt
│   ├── app
│   │   ├── GazeDialogue_follower.xml
│   │   ├── GazeDialogue_leader.xml
│   │   └── iCub_startup.xml
│   ├── include
│   │   ├── compute.h
│   │   ├── configure.h
│   │   ├── helpers.h
│   │   └── init.h
│   └── src
│       ├── icub_follower.cpp
│       ├── icub_leader.cpp
│       └── extras
│           ├── CvHMM.h
│           ├── CvMC.h
│           ├── compute.cpp
│           ├── configure.cpp
│           ├── detector.cpp
│           └── helpers.cpp
└── Detection
    ├── main.py | main_offline.py
    ├── face_detector.py | face_detector_gpu.py
    ├── objt_tracking.py
    ├── gaze_behaviour.py
    └── pupil_lsl_yarp.py
```
In case you have the detection App and/or the connectivity App on a different computer, do not forget to point YARP to where the iCub is running:

```
yarp namespace /icub
```

(in case `/icub` is the name of the yarp network)

```
yarp detect
```

(to check you are connected)

```
gedit /home/user/.config/yarp/_icub.conf
```

Add the following line:

```
'ip of computer you wish to connect' 10000 yarp
```
Read camera output:

```
yarpdev --device grabber --name /test/video --subdevice usbCamera --d /dev/video0
yarp connect /test/video /icubSim/texture/screen
```
- To make it work on Ubuntu 16.04 with CUDA-11.2 and TensorFlow 2.7 you need to do the following:
  - Install the nvidia driver 460.32.03 (`cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb`):

    ```
    wget https://developer.download.nvidia.com/compute/cuda/11.2.1/local_installers/cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb
    sudo dpkg -i cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb
    sudo apt-key add /var/cuda-repo-ubuntu1604-11-2-local/7fa2af80.pub
    sudo apt-get install cuda-11-2
    ```

  - Check that `apt-get` is not removing any packages
  - Install Cudnn 8.1 for CUDA-11.0, 11.1, and 11.2
  - Test using `deviceQuery` from the cuda-11.0 `samples/1_Utilities`
  - Follow the guidelines of Building and Instructions
  - If, after installing TensorFlow, the system complains about a missing `cudart.so.11.0`, then do this (you can add it to `~/.bashrc`):

    ```
    export PATH=$PATH:/usr/local/cuda-11.2/bin
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2/lib64
    ```
- To make it work on TensorFlow 2.7, alter the code in `~/software/tensorflow/models/research/object_detection/utils/label_map_utils.py` (line 132): instead of

  ```
  with tf.gfile.GFile(path, 'r') as fid:
  ```

  use

  ```
  with tf.io.gfile.GFile(path, 'r') as fid:
  ```
If you find this code useful in your research, please consider citing our paper:
M. Raković, N. F. Duarte, J. Marques, A. Billard and J. Santos-Victor, "The Gaze Dialogue Model: Nonverbal Communication in HHI and HRI," in IEEE Transactions on Cybernetics, doi: 10.1109/TCYB.2022.3222077.

