Welcome to the wiki-based documentation of ARAS.
Formerly, this project was split into two main sections:
- Human-Robot Interaction (HRI), by me, Frédéric Delaunay
- Cognitive acquisition of concepts (source), by Joachim de Greeff
HRI’s challenge is to bring natural interaction to robots, that is, giving robots the ability to understand and express themselves like humans do. So far, the focus has been on non-verbal communication (facial expression, emotionally driven behaviour…).
In the CONCEPT project, the robotic setup was essentially a novel robotic head (which we baptised the Rear-projected Animated Face) mounted on a Katana 400M robotic arm, although the software is not tied to a particular platform. Several interacting Python modules allow for scalability and dynamic programming of the robot.
- Communication Layer: allows Python servers to communicate with each other using a human-readable protocol (sketched after this list).
  - dedicated channels are created and configured from the contents of a configuration file
  - transparent address-family support (socket, named pipe) selected from configuration
  - servers are aggregated into a meta-server (to run as a single process), introducing a new protocol keyword: origin [subserver_name]
  - support for threading
- Facial Animation System: enhancing FACS for realtime facial animation (sketched after this list).
  - protocol: triplets of AU, normalized target value, and attack time
  - blender backend: runtime detection of character abilities
  - allows backends for mechatronic faces
- Vision module: provides access to cameras and higher-level visual information (sketched after this list).
  - face detection using pyvision
  - other detection algorithms (edge, shape, color)
- Global Feature Pool: allows external systems to inspect the robot’s state (sketched after this list).
  - all sensory (and proprioceptive) data available
  - remote queries over the network
- Behaviour: an enhanced finite state machine for easy definition of robot behaviour (sketched after this list).
  - allows multiple state machines to work together, sharing states
- Body Management and Hardware Design:
  - design complete; fabricated via laser cutting and rapid prototyping
  - Spine module in charge of spine and neck movement
  - backend available for the Katana 450 robotic arm
  - affective motor dynamics: emotional influence on motor movements and behaviours (sketched after this list)
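A minimal sketch of the Communication Layer idea: a threaded meta-server routing human-readable commands to its sub-servers via the origin keyword. All names here (handler class, port, sub-server names) are hypothetical; the real module builds its channels from a configuration file.

```python
import socketserver

class MetaServerHandler(socketserver.StreamRequestHandler):
    # In the real system the sub-server table would be built from the
    # configuration file; hard-coded here for illustration.
    SUBSERVERS = ("face", "spine")

    def handle(self):
        for raw in self.rfile:
            line = raw.decode().strip()
            # The origin keyword prefixes a command with its target
            # sub-server, e.g. "origin face AU 12 0.8 0.5".
            parts = line.split(" ", 2)
            if len(parts) == 3 and parts[0] == "origin":
                _, name, command = parts
                if name in self.SUBSERVERS:
                    print(f"-> {name}: {command}")  # stand-in for real dispatch

if __name__ == "__main__":
    # ThreadingTCPServer reflects the "support for threading" bullet above:
    # one process for all servers, one thread per client connection.
    with socketserver.ThreadingTCPServer(("localhost", 4242), MetaServerHandler) as srv:
        srv.serve_forever()
```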
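For the Facial Animation System, the protocol can be pictured as triplets of (AU, normalized target value, attack time). The exact wire format below is an assumption made for illustration; only the triplet structure comes from the list above.

```python
import socket

def send_expression(host, port, triplets):
    """Send (action_unit, normalized_target, attack_time_s) triplets."""
    with socket.create_connection((host, port)) as sock:
        for au, target, attack in triplets:
            sock.sendall(f"AU {au} {target:.2f} {attack:.2f}\n".encode())

# A smile: AU12 (lip corner puller) raised quickly,
# AU6 (cheek raiser) following more slowly.
send_expression("localhost", 4242, [(12, 0.8, 0.4), (6, 0.5, 0.8)])
```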
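The Vision module uses pyvision for face detection; as a self-contained stand-in, here is the same cascade-based idea expressed with OpenCV (a deliberate substitution, not the module’s actual code).

```python
import cv2

# Stock frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # first available camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"{len(faces)} face(s) found: {faces}")
cap.release()
```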
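The Global Feature Pool amounts to a thread-safe store that modules publish into and external clients query; below is a minimal in-process sketch with hypothetical names and no networking.

```python
import threading

class FeaturePool:
    """Toy version: publish/query features under dotted names."""
    def __init__(self):
        self._lock = threading.Lock()
        self._features = {}

    def publish(self, name, value):
        with self._lock:
            self._features[name] = value

    def query(self, name):
        with self._lock:
            return self._features.get(name)

pool = FeaturePool()
pool.publish("vision.faces", [(120, 80, 64, 64)])  # bounding box
pool.publish("spine.neck_pitch", 0.12)             # radians
print(pool.query("vision.faces"))
```

In the real module, the query side is exposed over the network through the Communication Layer.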
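The Behaviour module’s enhanced finite state machine can be pictured as rule tables keyed by (state, event), with several machines cooperating through shared state names. Everything below is a hypothetical sketch of that idea.

```python
class StateMachine:
    def __init__(self, rules, start):
        self.rules = rules              # {(state, event): next_state}
        self.state = start

    def step(self, event):
        self.state = self.rules.get((self.state, event), self.state)
        return self.state

# Two machines sharing the "engaged" state: vision drives the first,
# audio drives the second, but both agree on when the robot is engaged.
attention = StateMachine({("idle", "face_found"): "engaged",
                          ("engaged", "face_lost"): "idle"}, start="idle")
dialogue = StateMachine({("engaged", "speech"): "listening",
                         ("listening", "silence"): "engaged"}, start="engaged")
print(attention.step("face_found"))     # -> engaged
```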
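As for affective motor dynamics, one way to picture the emotional influence on movement is a mapping from emotional dimensions to motion parameters; the linear mapping below is invented for illustration, not the project’s actual model.

```python
def affective_params(arousal, valence, base_speed=0.5, base_amplitude=1.0):
    """arousal, valence in [-1, 1] -> (speed, amplitude) scaling."""
    speed = base_speed * (1.0 + 0.5 * arousal)          # aroused -> faster
    amplitude = base_amplitude * (1.0 + 0.3 * valence)  # positive -> wider
    return speed, amplitude

print(affective_params(arousal=0.8, valence=-0.4))      # agitated, mildly negative
```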
Work In Progress:
- visual motoring: works, but needs a friendlier interface.
- facial emotion recognition, gaze detection, eye tracking (if possible).
- laser sensor: support for the Hokuyo URG-04LX-UG01
- face recognition: to be fully integrated
- audio source localization: basic code available, to be fully integrated
- Audio Processing: word recognition (subset of English words), speaker recognition (if possible)
- Speech Synthesis: PSOLA or a similar base, with realtime pitch modulation according to emotional dimensions (sketched below).
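A sketch of the intended pitch modulation: deriving a multiplicative F0 factor for a PSOLA-style synthesiser from emotional dimensions. The linear mapping and its coefficients are assumptions for illustration only.

```python
def pitch_factor(arousal, valence):
    """arousal, valence in [-1, 1]; returns a multiplicative F0 factor."""
    # Higher arousal raises mean pitch; valence contributes more weakly.
    return 1.0 + 0.25 * arousal + 0.10 * valence

for emotion, (a, v) in {"calm": (-0.5, 0.3), "angry": (0.9, -0.8)}.items():
    print(emotion, round(pitch_factor(a, v), 3))
```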