Branch from Lukas' robot project to build a smart home assistant

Roland_Robot

This will run a simple robot with a webserver on a Raspberry Pi with the Adafruit Motor HAT. This is a development of Lukas' great work on his robot. The core things we are adding are mapping capability, notification APIs and improved self-driving.

Hardware

To get started, you should be able to make the robot work without the arm, whiskers, sonar and servo hat.

Programs

  • robot.py runs commands from the command line
  • sonar.py tests sonar wired into GPIO ports
  • wheels.py tests the simple DC motor wheels
  • arm.py tests a servo-controlled robot arm
  • autonomous.py implements a simple driving algorithm using the wheels and sonar
  • inception_server.py runs an image-classifying microservice
  • Notification_Test.py tests the Twitter and Gmail integration
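As a sketch of how robot.py might dispatch its command-line verbs to motor actions — the function names and the set of commands beyond "forward" are assumptions, not the repo's actual API:

```python
# Hypothetical sketch of robot.py-style command dispatch: map each
# command-line verb to a zero-argument motor action.

def make_dispatcher(motor):
    """Build a dict mapping command names to motor methods."""
    return {
        "forward": motor.forward,
        "back": motor.back,
        "left": motor.left,
        "right": motor.right,
        "stop": motor.stop,
    }

def run_command(motor, name):
    """Look up and run one command, exiting on an unknown verb."""
    commands = make_dispatcher(motor)
    if name not in commands:
        raise SystemExit("unknown command: %s" % name)
    commands[name]()
```

A table like this keeps adding a new verb to a one-line change rather than a growing if/elif chain.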

Example Robots

Here is the robot I made that uses this software

Robots

Wiring The Robot

Sonar

If you want to use the default sonar configuration, wire it like this:

  • Left sonar trigger GPIO pin 23 echo 24
  • Center sonar trigger GPIO pin 17 echo 18
  • Right sonar trigger GPIO pin 22 echo 27
  • Right whisker GPIO pin 21
  • Left whisker GPIO pin 20
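The trigger/echo wiring above suggests an HC-SR04-style sensor (the exact model is an assumption): sonar.py would pulse the trigger pin, time the echo pulse, and convert that duration to distance. The conversion is just the speed of sound, halved because the pulse travels out and back:

```python
# Echo-duration-to-distance math for an HC-SR04-style sonar.
# Sound travels roughly 34300 cm/s in air at room temperature.

SPEED_OF_SOUND_CM_S = 34300.0

def echo_to_cm(pulse_seconds):
    """Convert an echo pulse duration to distance in centimetres.

    Divide by two because the pulse covers the distance twice
    (out to the obstacle and back to the sensor).
    """
    return pulse_seconds * SPEED_OF_SOUND_CM_S / 2.0
```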

You can modify the pins by making a robot.conf file.
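The actual robot.conf format isn't documented here; as one plausible sketch, assuming a simple INI-style `[pins]` section that overrides the defaults above:

```python
# Sketch of pin-override loading; the [pins] section name and key names
# are assumptions about robot.conf, not the repo's documented format.
import configparser

DEFAULT_PINS = {
    "left_trigger": 23, "left_echo": 24,
    "center_trigger": 17, "center_echo": 18,
    "right_trigger": 22, "right_echo": 27,
    "right_whisker": 21, "left_whisker": 20,
}

def load_pins(conf_text):
    """Return the default pin map, updated from a [pins] section if present."""
    pins = dict(DEFAULT_PINS)
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    if parser.has_section("pins"):
        for key, value in parser.items("pins"):
            pins[key] = int(value)
    return pins
```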

Wheels

You can easily change this, but this is what wheels.py expects:

  • M1 - Front Left
  • M2 - Back Left (optional - leave unwired for 2wd chassis)
  • M3 - Back Right (optional - leave unwired for 2wd chassis)
  • M4 - Front Right
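The port layout above can be sketched as a speed map: given the desired left-side and right-side speeds, produce the per-port values, driving only the front motors on a 2WD chassis. The function name and signature are illustrative, not the repo's API:

```python
# Hypothetical helper mapping side speeds onto Motor HAT ports M1..M4,
# following the wiring wheels.py expects.

def wheel_speeds(left, right, four_wd=True):
    """Return a port -> speed dict; rear ports only when four_wd is set."""
    speeds = {"M1": left, "M4": right}  # front left / front right, always driven
    if four_wd:
        speeds["M2"] = left             # back left (unwired on a 2WD chassis)
        speeds["M3"] = right            # back right (unwired on a 2WD chassis)
    return speeds
```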

Installation

basic setup

There are a ton of articles on how to do basic setup of a Raspberry Pi - one good one is here: https://www.howtoforge.com/tutorial/howto-install-raspbian-on-raspberry-pi/. Boot it with a screen, keyboard and mouse, connect to your wifi and note your IP address. It is best to log into your router and assign the Pi a static IP address so you know what to ssh to.

You will need to turn on I2C and, optionally, the camera:

sudo raspi-config
  1. Change your password as you need one for SSH
  2. Change your hostname to something meaningful like Robot or Roland
  3. Interfacing options, enable: P1 Camera, P2 SSH, P5 I2C
  4. Update - always update :) - then Finish

Now upgrade your Pi - this takes a while - then shut it down:

sudo apt-get update
sudo apt-get upgrade
pip install --upgrade pip
sudo shutdown now

Now you can unplug and SSH into it from a real computer.

Next you will need to install i2c-tools and smbus:

sudo apt-get install i2c-tools python-smbus python3-smbus

Test that your hat is attached and visible with

i2cdetect -y 1

Install this code

sudo apt-get install git
git clone https://github.com/fmacrae/Roland_Robot.git
cd Roland_Robot

Install dependencies. The Adafruit pip install fails at the moment, so install the Motor HAT library manually:

cd ~
git clone https://github.com/adafruit/Adafruit-Motor-HAT-Python-Library.git
cd Adafruit-Motor-HAT-Python-Library
sudo apt-get install python-dev
sudo python setup.py install
cd ~/Roland_Robot
sudo easy_install pykalman
sudo pip install -r requirements.txt
sudo apt-get install flite
sudo apt-get install python-paramiko

At this point you should be able to drive your robot locally, try:

./robot.py forward

server

To run a webserver in the background with a camera, you need to set up gunicorn and nginx.

nginx

Nginx is a lightweight, fast reverse proxy - we store the camera image in RAM and serve it up directly. This was the only way I was able to get any kind of decent fps from the Raspberry Pi camera. We also need to proxy to gunicorn so that the user can control the robot from a webpage.
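The real configuration ships in nginx/nginx.conf; purely as an illustration of the idea (the paths and port here are assumptions, not the actual file), the two roles look roughly like:

```nginx
# Illustrative only -- use the repo's nginx/nginx.conf, not this.
location /cam.jpg {
    alias /dev/shm/mjpeg/cam.jpg;      # camera frame held in RAM
}
location / {
    proxy_pass http://127.0.0.1:8000;  # gunicorn web app (port assumed)
}
```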

copy the configuration file from nginx/nginx.conf to /etc/nginx/nginx.conf

sudo apt-get install nginx
sudo cp nginx/nginx.conf /etc/nginx/nginx.conf

restart nginx

sudo nginx -s reload

gunicorn

install gunicorn and the web services if you want to control the robot manually

sudo pip install gunicorn

copy the configuration file from services/web.service to /etc/systemd/system/web.service

sudo cp services/web.service /etc/systemd/system/web.service
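For orientation, a systemd unit of this kind typically looks like the sketch below - the repo's actual services/web.service is what you should use; the app module, paths and port here are assumptions:

```ini
# Illustrative sketch only; the real file is services/web.service.
[Unit]
Description=Roland Robot web app
After=network.target

[Service]
WorkingDirectory=/home/pi/Roland_Robot
ExecStart=/usr/local/bin/gunicorn -b 127.0.0.1:8000 app:app
Restart=always

[Install]
WantedBy=multi-user.target
```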

start gunicorn web app service

sudo systemctl daemon-reload
sudo systemctl enable web
sudo systemctl start web

Your webservice should be started now.

camera

In order to stream from the camera you can use RPi-cam. It's documented at http://elinux.org/RPi-Cam-Web-Interface but you can also just run the following

git clone https://github.com/silvanmelchior/RPi_Cam_Web_Interface.git
cd RPi_Cam_Web_Interface
chmod u+x *.sh
./install.sh

A popup giving you choices will appear; accept the defaults. You'll get a few errors like:

Failed to start The Apache HTTP Server.

But just ignore them. Now a stream of images from the camera should be constantly updating the file at /dev/shm/mjpeg, and Nginx will serve up the image directly if you request localhost/cam.jpg.

Be aware that SD cards only have a limited write capacity, so if you leave this running 24/7 then over a few months you will burn out a consumer-level card. Make sure you clone your card at least every other month, as it will otherwise either lock into a read-only state or start to have write errors - it is a good practice to get into. Either plug the Pi into a monitor and use the SD card clone function, or put the SD card into your main machine and use Disks to take an image of it. One more warning: use the same SD card type when restoring, as there is a minor variation in total size between the 16 GB manufacturers.

tensorflow

pip install tensorflow

The last command takes an age - run it and go out, or run it just before bed so it can finish overnight.

pull tensorflow to get the examples etc

cd ~
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow

Now create a symbolic link in your tensorflow directory to the pi_examples label_image binary:

cd ~/tensorflow
ln -s tensorflow/contrib/pi_examples/label_image/gen/bin/label_image label_image

Now run this and your robot should start to explore and log its environment.

sh startRobot.sh

notification

  • Update Notification_Settings.csv with your Twitter API OAuth settings; Siraj has a good guide on how to set it up here: https://www.youtube.com/watch?v=o_OZdbCzHUA
  • Also create a Gmail API OAuth token called client_secret.json using the instructions here: https://developers.google.com/gmail/api/quickstart/python
  • Run Notification_Test.py, which will hopefully Tweet and then ask for your permission via a browser to send email.
  • If you cannot do this because you are running over SSH or similar, then install the dependencies and run Notification_Test.py on your desktop, which creates a special json file in a hidden subfolder of your home directory called .credentials
  • sftp the file to your Pi:
sftp pi@yourpisaddress
lcd ~/.credentials
cd /home/pi/.credentials
put gmail-python-email-send.json
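Notification_Settings.csv's exact columns aren't documented here; assuming a simple key,value row per credential (the key names below are placeholders), loading it might look like:

```python
# Hypothetical sketch of reading Notification_Settings.csv; the real
# column layout and key names are assumptions.
import csv
import io

def load_settings(csv_text):
    """Parse key,value rows into a settings dict, skipping malformed lines."""
    settings = {}
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) >= 2:
            settings[row[0].strip()] = row[1].strip()
    return settings
```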
