
simple_talking_robot

Code for a very simple talking robot head, built on the Raspberry Pi.

  • Requires Python 2.7.*

  • This project currently implements a basic robot head designed around Raspberry Pi boards.

  • It handles a pre-defined conversation (no NLP), face recognition, and head pan/tilt.

  • The robot starts a conversation and simultaneously moves its head around to look for familiar faces. If it finds one, it tries to follow the face as the person moves around, and it addresses the person by name for as long as the face stays in sight.

  • To make the robot recognize your face, train an OpenCV face recognizer and export the model. Then load the XML by uncommenting and editing the line #recognizer.load(os.path.join(current_dir, 'trained_model.xml')) in camera.py. A training sketch follows this list.
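
A minimal sketch of training and exporting the recognizer, assuming the legacy OpenCV 2.4 API that matches the recognizer.load(...) call above (newer OpenCV releases expose it as cv2.face.LBPHFaceRecognizer_create() with .read()/.write()); the faces/<person>/ folder layout is a hypothetical example, not part of this repo:

```python
import os
import cv2
import numpy as np

# Hypothetical layout: faces/<person_name>/<photo>.jpg, one folder per person
images, labels = [], []
for label, person in enumerate(sorted(os.listdir('faces'))):
    person_dir = os.path.join('faces', person)
    for fname in os.listdir(person_dir):
        img = cv2.imread(os.path.join(person_dir, fname), cv2.IMREAD_GRAYSCALE)
        if img is not None:
            images.append(img)
            labels.append(label)

# Train an LBPH recognizer and export it to the XML that camera.py loads
recognizer = cv2.createLBPHFaceRecognizer()
recognizer.train(images, np.array(labels))
recognizer.save('trained_model.xml')
```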

The hardware

  • My hardware setup for this code is a pair of Raspberry Pi 3 boards:
    • Pi1: Handles the neck and sight control. It is connected to a Pi camera, plus a servo-pi driver board that controls two servo motors mounted on a pan/tilt bracket (see the sketch after this list).
    • Pi2: Handles the speaking. It is connected to a speaker and, through a USB sound card, to a microphone. (I connected the two Pi boards with a LAN cable. You could use a single board; I added a second one to dedicate a full board to image recognition.)
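
The controllers in this repo drive the neck; below is a minimal sketch of pan/tilt control, assuming a PCA9685-based servo board driven with the Adafruit_PCA9685 library. The channel numbers and pulse range are illustrative, and the actual board and driver used by this project may differ:

```python
import time
import Adafruit_PCA9685

PAN_CHANNEL, TILT_CHANNEL = 0, 1      # hypothetical servo channels
SERVO_MIN, SERVO_MAX = 150, 600       # pulse ticks at 50 Hz (~1-2 ms)

pwm = Adafruit_PCA9685.PCA9685()
pwm.set_pwm_freq(50)                  # standard 50 Hz servo frequency

def set_angle(channel, angle):
    """Map 0-180 degrees onto the servo's pulse range."""
    pulse = int(SERVO_MIN + (SERVO_MAX - SERVO_MIN) * angle / 180.0)
    pwm.set_pwm(channel, 0, pulse)

# Sweep the head from left to right while keeping the tilt level
set_angle(TILT_CHANNEL, 90)
for pan in range(0, 181, 10):
    set_angle(PAN_CHANNEL, pan)
    time.sleep(0.2)
```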

To start up

  • Check Setup.txt to get started.
  • Once the environment is set up, run start_head.py and start_speech.py separately to start the robot. Once they complete a handshake, they communicate over a socket (a minimal sketch of the pattern follows this list).
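
A minimal sketch of that handshake pattern over a plain TCP socket; the actual addresses, port, and message format live in the socketing/ module and will differ:

```python
import socket

PORT = 5005                                 # hypothetical port

def wait_for_peer():
    """Head side: accept a connection and acknowledge the hello message."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('', PORT))
    server.listen(1)
    conn, _addr = server.accept()           # blocks until the speech process connects
    if conn.recv(1024) == b'HELLO':         # hypothetical handshake message
        conn.sendall(b'READY')
    return conn                             # from here on, exchange commands/results

def connect_to_head(host='192.168.0.2'):   # hypothetical Pi1 address
    """Speech side: connect to the head process and complete the handshake."""
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect((host, PORT))
    client.sendall(b'HELLO')
    if client.recv(1024) != b'READY':
        raise RuntimeError('handshake failed')
    return client
```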

The speech engine works better with an internet connection, but it keeps working if the connection is lost:

  • Speech to Text uses the speech_recognition package, which calls the Google API when the device is online and falls back to Sphinx when there is no internet connection.
  • Text to Speech uses the Amazon IVONA API, with pyttsx as the offline fallback. A sketch of the fallback logic follows this list.
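
A minimal sketch of the online/offline fallback, using the speech_recognition package (recognize_google with a recognize_sphinx backup) and pyttsx for offline speech; the project's own wrappers in the controllers may be structured differently, and the IVONA online path is omitted here:

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

def listen_and_transcribe():
    """Record from the default microphone and return the recognized text."""
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    try:
        # Online path: Google Web Speech API
        return recognizer.recognize_google(audio)
    except sr.RequestError:
        # Offline fallback: CMU Sphinx (requires pocketsphinx to be installed)
        return recognizer.recognize_sphinx(audio)
    except sr.UnknownValueError:
        return ''   # speech was unintelligible

def speak(text):
    """Offline text-to-speech fallback with pyttsx."""
    import pyttsx
    engine = pyttsx.init()
    engine.say(text)
    engine.runAndWait()
```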