Ioannis Kourouklides, Researcher and Data Scientist
aiD: aRTIFICIAL iNTELLIGENCE for the Deaf
Project Leader: Dr. Soterios Chatzis
Organization: Impact Tech LTD
The ‘aRTIFICIAL iNTELLIGENCE for the Deaf’ (aiD) project has been awarded €1.7 million in funding by the EU’s Horizon 2020 programme. The project brings together expertise from Cyprus University of Technology, Georgia Tech, Anontec Infosystems Ltd, Ethniko kai Kapodistriako Panepistimio Athinon, HandsUp Agency IKE, Modus SA, Hellenic Federation of the Deaf, University of Kent, Hostdrop OÜ and European Union of the Deaf.
ENERFUND: An ENErgy Retrofit FUNDing tool
Project Leader: Dr Alexandros Charalambides
Organization: Cyprus University of Technology
The Sustainable Energy Laboratory was awarded €1.5 million from the European Commission’s Horizon 2020 Programme to lead the development of a tool that will rate and score deep renovation opportunities – much like a credit score used by banks to rate clients. The tool will be based on a methodology to be developed and on a set of parameters such as EPC data, the number of certified installers, governmental schemes currently running, etc. By providing a rating for deep renovation opportunities – whether for private establishments or for public buildings – funding institutes can provide targeted loans, retrofit companies can identify sound opportunities, municipalities can promote targeted incentives, and the public’s trust in retrofitting will be enhanced.
The partners, from 12 countries, include 2 universities, in charge of project management and the development of the methodology behind the tool; 2 SMEs with extensive experience in database management, EPC mapping and the development of online decision-making tools; and 11 Ministries, Energy Agencies, NGOs, etc., that are connected with the relevant stakeholders throughout Europe and can promote the tool.
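As an illustration of how such a rating might combine its inputs, the sketch below folds a few hypothetical indicators (EPC class, certified-installer density, number of active governmental schemes) into a single 0–100 score with assumed weights. The indicator names, normalisation caps and weights are illustrative assumptions only; the actual ENERFUND methodology and parameter set were to be developed within the project.

```python
def retrofit_score(epc_class, installers_per_10k, active_schemes,
                   weights=(0.5, 0.3, 0.2)):
    """Hypothetical 0-100 deep-renovation rating from three indicators.

    epc_class          : EPC label 'A'..'G' (poorer classes imply more
                         retrofit potential, so they score higher)
    installers_per_10k : certified installers per 10,000 inhabitants
    active_schemes     : number of governmental support schemes running
    weights            : assumed relative importance of each indicator
    """
    # Map EPC classes A..G onto [0, 1], with G (worst) -> 1.0
    epc_potential = "ABCDEFG".index(epc_class) / 6.0
    # Normalise the other indicators, capping at an assumed saturation point
    installer_availability = min(installers_per_10k / 5.0, 1.0)
    scheme_support = min(active_schemes / 3.0, 1.0)

    w_epc, w_inst, w_scheme = weights
    score = w_epc * epc_potential + w_inst * installer_availability \
        + w_scheme * scheme_support
    return round(100 * score, 1)
```

For example, a class-G building in a region saturated with installers and support schemes would score 100.0, while a class-A building with no installers or schemes would score 0.0.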
GIFT-Surg: Guided Instrumentation for Fetal Therapy and Surgery
Project Leader: Prof Sebastien Ourselin
Organization: University College London (UCL)
Summary: UCL was awarded £10 million from the Wellcome Trust and the EPSRC to develop better tools and imaging techniques that will improve the success of surgery and other therapies on unborn babies (for conditions such as spina bifida or twin-to-twin transfusion syndrome, TTTS) in collaboration with KU Leuven, Great Ormond Street Hospital and NHS University College London Hospitals.
At present, such operations are extremely difficult and risky. The objective of the Wellcome-funded project is to develop instruments – based on the latest developments in Optics, Robotics and Medical Image Computing – that will make them safer and more effective.
Visuo-Spatial Perceptual Perspective Taking in Mobile Robotics (using Kinect)
Master's thesis - Final year project at Personal Robotics Laboratory, Intelligent Systems and Networks Group
Organization: Imperial College London
Supervisor: Dr Yiannis Demiris
Abstract: Robots that cooperate with humans need to be able to comprehend certain mental processes of the person they are interacting with, such as visual and spatial perception. In this thesis, a mechanism is proposed by which a robot can infer the perspective of a single human. The system as a whole imitates bio-inspired mechanisms of visual and spatial perception in mobile robots. In particular, it is implemented using only low-quality depth data from a depth camera, such as the Kinect. Beforehand, an offline dense 3D map of the environment is constructed from monocular cues of the aforementioned device using a CUDA-compliant GPU. During the human-robot interaction, the focus of attention is first inferred in real time using head pose estimation. The machine learning technique involved is discriminative random forests, pre-trained on a legacy dataset called Biwi. At the same time, discrete relaxation labelling, a computer vision technique, is performed as a verification step, since the machine learning method can occasionally misclassify other parts of the body as the human head. Secondly, an online 3D reconstruction of the face and the rest of the upper body takes place using OpenGL. Finally, the scene that the human is looking towards is visualised using the environment map constructed offline. Results show that the human's perspective is correctly derived and that the system is suitable for use on real-life robots.
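The head-pose stage of the pipeline above – a discriminative random forest regressing head orientation from depth data – can be sketched as follows. This is a minimal illustration using scikit-learn and synthetic data: the flattened "depth patches", feature dimensions and (yaw, pitch) targets are stand-ins, not the thesis's actual features, implementation, or the Biwi dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for training data: each row plays the role of a
# flattened depth patch around the head; each target row is a
# (yaw, pitch) head orientation in degrees.
X = rng.normal(size=(500, 64))
true_w = rng.normal(size=(64, 2))
y = X @ true_w  # simple synthetic patch -> angle relation, for the sketch only

# Random forest regressor standing in for the discriminative forests
# used for real-time head pose estimation.
forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(X, y)

# At interaction time, a new depth patch would be fed in to recover
# the head orientation, from which the gaze direction is derived.
yaw, pitch = forest.predict(X[:1])[0]
```

In the thesis the forest's output would then be checked by the relaxation-labelling verification step before the gaze ray is cast into the offline environment map.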