Autonomous Mini Cart

The Mini Cart is designed to follow directions without pre-programmed instructions. The end user can simply point in the desired direction of travel, and the robot will react accordingly. Computer vision and machine learning are used to determine the intended command from the instructor.

Raspberry Pi


The Mini Cart is powered by a Raspberry Pi; the Pi is responsible for all ML computations, capturing pictures via the Pi Camera, and reacting by powering the motors with GPIO (General Purpose Input/Output) pins. A Python virtualenv was used to install and manage essential Python packages, including TensorFlow, NumPy, Pillow, RPi.GPIO, and picamera.
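
A minimal sketch of the capture, predict, react loop on the Pi is shown below; the GPIO pin numbers, file path, and predict() helper are placeholders for illustration, not the repository's actual wiring or model code.

```python
import time
import RPi.GPIO as GPIO
from picamera import PiCamera

LEFT_PIN, RIGHT_PIN = 17, 27   # hypothetical BCM pins driving the motors

GPIO.setmode(GPIO.BCM)
GPIO.setup([LEFT_PIN, RIGHT_PIN], GPIO.OUT)

camera = PiCamera(resolution=(512, 512))

def predict(image_path):
    """Placeholder for the TensorFlow Lite inference step."""
    return "forward"

try:
    while True:
        camera.capture('/tmp/frame.jpg')       # grab a frame from the Pi Camera
        command = predict('/tmp/frame.jpg')    # classify the pointing gesture
        moving = command == "forward"
        GPIO.output(LEFT_PIN, moving)          # react on the GPIO pins
        GPIO.output(RIGHT_PIN, moving)
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```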

Chassis


The Mini Cart chassis was purchased online. It holds the Pi, Pi Camera, servo motors, wheels, two power banks, and a breadboard with a circuit supporting drive functionality.

A custom camera mount was fabricated and installed on the front of the Mini Cart.

The circuit was designed around an L293DNE motor driver chip, providing forward and backward movement at various speeds. I used this tutorial for assistance.
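
As a rough illustration (not this project's exact wiring), one channel of an L293D-style driver can be controlled from RPi.GPIO by PWM-ing the enable pin for speed and toggling the two input pins for direction; the pin numbers below are hypothetical.

```python
import RPi.GPIO as GPIO

EN, IN1, IN2 = 18, 23, 24      # hypothetical BCM pins: enable, input 1, input 2

GPIO.setmode(GPIO.BCM)
GPIO.setup([EN, IN1, IN2], GPIO.OUT)

pwm = GPIO.PWM(EN, 1000)       # 1 kHz PWM on the enable pin controls speed
pwm.start(0)

def drive(speed):
    """speed in [-100, 100]: sign sets direction, magnitude sets duty cycle."""
    GPIO.output(IN1, speed > 0)
    GPIO.output(IN2, speed < 0)
    pwm.ChangeDutyCycle(abs(speed))

drive(75)    # forward at 75% duty cycle
drive(-40)   # backward at 40% duty cycle
drive(0)     # stop
```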

Machine Learning


A simple CNN was implemented to generate predictions. One issue specific to this project was model size: it was impossible to load the original model (~400 MB) into memory on the Raspberry Pi, a common problem for ML 'at the edge'. Several steps were taken to reduce the model size to ~2 MB. The model was quantized to float16 precision (from float32) and further compressed using TensorFlow Lite.
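
A sketch of the float16 post-training quantization and TensorFlow Lite conversion, assuming a trained Keras model object named model; this is the standard TFLiteConverter recipe, not necessarily the exact script used here.

```python
import tensorflow as tf

# Convert the trained Keras model ("model") to TensorFlow Lite with
# float16 weights instead of float32.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

with open('model_fp16.tflite', 'wb') as f:
    f.write(converter.convert())

# On the Pi, the much smaller .tflite file is loaded with the interpreter.
interpreter = tf.lite.Interpreter(model_path='model_fp16.tflite')
interpreter.allocate_tensors()
```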

In addition, images from the dataset were resized to 256 x 256 (from 512 x 512). This decreased the size of the convolution layer outputs and the number of parameters in the dense layers later in the network.
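
The resizing step could be as simple as the Pillow snippet below; the directory names are placeholders, not the repository's actual layout.

```python
from pathlib import Path
from PIL import Image

# Shrink every 512 x 512 image in the raw dataset down to 256 x 256.
src, dst = Path('data/raw'), Path('data/resized')   # placeholder directories
dst.mkdir(parents=True, exist_ok=True)

for path in src.glob('*.jpg'):
    Image.open(path).resize((256, 256)).save(dst / path.name)
```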

I created my own dataset, gathering approximately 400 images of myself pointing in various directions. The images were split 80/20 into training and validation sets, respectively. On the validation set of 80 images, the model eventually achieved 88% accuracy.
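
An 80/20 split like this can be produced with Keras' ImageDataGenerator; the directory layout, batch size, and per-direction class folders below are assumptions for illustration.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Split the ~400 pointing images 80/20 into training and validation sets.
# 'data/resized' with one subfolder per pointing direction is assumed.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    'data/resized', target_size=(256, 256),
    batch_size=16, subset='training')
val_gen = datagen.flow_from_directory(
    'data/resized', target_size=(256, 256),
    batch_size=16, subset='validation')
```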

Below you can observe the progress of training...

Validation Loss across Epochs:


Validation Accuracy across Epochs:

Final Product


For a video demo, visit this link.
