Mr. Robot

A Self-Driving Remote Control Car controlled from a browser, all running inside a Raspberry Pi.

Hardware

Hardware Architecture

The remote control car has 2 DC motors, and I used the L298N module to control them. You can see how to do that here. Both the motors and the L298N module are powered by 4 AA batteries. The Raspberry Pi is powered by an external battery, and we also have a Pi Camera attached to it. Finally, there are a few jumper wires connecting the Pi's GPIO pins to the L298N module.
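For reference, here is a minimal sketch of driving the L298N from the Pi's GPIO with the RPi.GPIO library. The pin numbers, PWM frequency, and duty cycles are placeholder assumptions that depend on your own wiring, not values taken from this project.

```python
import RPi.GPIO as GPIO

# Placeholder BCM pin numbers -- adjust to match your own wiring.
IN1, IN2, ENA = 17, 27, 22   # left motor direction pins + enable (PWM)
IN3, IN4, ENB = 23, 24, 25   # right motor direction pins + enable (PWM)

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, ENA, IN3, IN4, ENB], GPIO.OUT)

# PWM on the enable pins sets motor speed via the duty cycle (0-100).
left_pwm = GPIO.PWM(ENA, 1000)
right_pwm = GPIO.PWM(ENB, 1000)
left_pwm.start(0)
right_pwm.start(0)

def forward(speed=60):
    """Spin both motors forward at the given duty cycle."""
    GPIO.output(IN1, GPIO.HIGH)
    GPIO.output(IN2, GPIO.LOW)
    GPIO.output(IN3, GPIO.HIGH)
    GPIO.output(IN4, GPIO.LOW)
    left_pwm.ChangeDutyCycle(speed)
    right_pwm.ChangeDutyCycle(speed)

def stop():
    """Cut power to both motors."""
    left_pwm.ChangeDutyCycle(0)
    right_pwm.ChangeDutyCycle(0)
```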

Software

Software Architecture

The main program builds a Self-Driving Car object and starts a WebServer for the remote control and camera stream. The car is made of a few parts (a wiring sketch follows the list):

  • SelfDrivingCar:
    • Handles interaction with the motors for movement and speed
  • CarCamera:
    • Handles camera stream and object detection for the stop sign
  • WebServer:
    • Creates a webserver to stream the camera and control the car
  • TrainData:
    • Handles gathering and saving training data
  • CarBrain:
    • Handles loading the pre-built Machine Learning model and self-driving based on the camera images
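Below is an illustrative sketch of how these parts could be wired together in the main program. The module, class, and method names are assumptions for illustration, not the repository's actual API.

```python
# Hypothetical wiring of the components listed above.
from car_camera import CarCamera          # assumed module and class names
from car_brain import CarBrain
from self_driving_car import SelfDrivingCar
from web_server import WebServer

def main():
    camera = CarCamera()                      # Pi Camera stream + stop sign detection
    brain = CarBrain(model_path="model.h5")   # loads the pre-built Keras model
    car = SelfDrivingCar(camera=camera, brain=brain)  # talks to the motors
    server = WebServer(car)                   # remote control + camera stream
    server.run(host="0.0.0.0", port=8000)     # reachable from any browser on the LAN

if __name__ == "__main__":
    main()
```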

Challenges

Considering that I wanted to run everything inside the Raspberry Pi (server, object detection, turn prediction, etc.), a big concern was always speed and performance. If the turn prediction took more than 500ms the car would miss its turn and leave the road. Although there may be further optimizations, the current program makes a prediction + turn in 300ms, which is more than enough for the success of the project.
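One quick way to verify that budget is to time the prediction-and-turn step directly. The `brain`, `camera`, and `car` objects and their methods below are hypothetical placeholders, not this repository's API.

```python
import time

start = time.monotonic()
command = brain.predict_turn(camera.latest_frame())  # hypothetical helpers
car.execute(command)
elapsed_ms = (time.monotonic() - start) * 1000
print(f"prediction + turn took {elapsed_ms:.0f} ms")  # should stay well under 500 ms
```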

Model training

To generate data, the train mode saved a frame as an RGB array together with the command I issued (forward, left or right). After I gathered a few hundred picture + command pairs, I loaded everything onto my computer and used a script to generate even more training data by flipping the images (and adjusting the command accordingly) and also changing the image brightness or colors.
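The offline augmentation step could look roughly like this sketch, assuming each sample is an RGB NumPy array paired with its command string; the helper names are illustrative.

```python
import numpy as np

MIRRORED = {"left": "right", "right": "left", "forward": "forward"}

def flip_sample(image, command):
    """Mirror the frame horizontally and swap left/right commands."""
    return np.fliplr(image), MIRRORED[command]

def jitter_brightness(image, factor):
    """Scale pixel intensities to simulate different lighting."""
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def augment(samples):
    """Turn each (image, command) pair into several training samples."""
    out = []
    for image, command in samples:
        out.append((image, command))
        out.append(flip_sample(image, command))
        out.append((jitter_brightness(image, 0.7), command))
        out.append((jitter_brightness(image, 1.3), command))
    return out
```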

With all that, I trained several variations of a Deep Learning model using Keras + TensorFlow, based on this Nvidia article and explained in this video. The model has image normalization to avoid saturation and make the gradients work better, 5 convolutional layers to handle the feature engineering, a dropout layer to avoid overfitting, and finally 5 fully connected layers for predicting the turn.
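For reference, here is a sketch of a model in that spirit: normalization, five convolutional layers, a dropout layer, and five fully connected layers. The exact layer sizes, activations, input shape, and the three-way softmax output (forward / left / right) are assumptions, not necessarily the configuration trained in this repository.

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Dropout, Flatten, Dense

def build_model(input_shape=(66, 200, 3)):
    model = Sequential([
        Input(shape=input_shape),
        # Normalize pixels to [-0.5, 0.5] to avoid saturation and help the gradients.
        Lambda(lambda x: x / 255.0 - 0.5),
        # Five convolutional layers handle the feature extraction.
        Conv2D(24, (5, 5), strides=(2, 2), activation="elu"),
        Conv2D(36, (5, 5), strides=(2, 2), activation="elu"),
        Conv2D(48, (5, 5), strides=(2, 2), activation="elu"),
        Conv2D(64, (3, 3), activation="elu"),
        Conv2D(64, (3, 3), activation="elu"),
        Dropout(0.5),                      # guard against overfitting
        Flatten(),
        # Five fully connected layers predict the turn.
        Dense(1164, activation="elu"),
        Dense(100, activation="elu"),
        Dense(50, activation="elu"),
        Dense(10, activation="elu"),
        Dense(3, activation="softmax"),    # forward / left / right
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```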

Self-driving

The pre-trained model is loaded at the start of the program. Once the command comes in from any browser, a loop starts the self-driving: the process grabs the most recent image, makes a prediction, and turns the car.
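The loop could look roughly like this sketch; the `car`, `camera`, and `brain` objects and their methods are placeholders for illustration.

```python
import time

COMMANDS = ["forward", "left", "right"]

def self_drive(car, camera, brain, keep_driving):
    """Keep predicting and steering until keep_driving() returns False."""
    while keep_driving():
        frame = camera.latest_frame()        # most recent image, already resized
        if camera.stop_sign_visible(frame):  # Haar cascade check (see below)
            car.stop()
            break
        probs = brain.model.predict(frame[None, ...], verbose=0)[0]
        car.execute(COMMANDS[int(probs.argmax())])
        time.sleep(0.05)                     # small pause between iterations
```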

Object detection

The stop sign detection was made using Haar feature-based cascade classifiers. OpenCV has very comprehensive functions for this, and you can train your own model by following tutorials like this. This approach is a good fit because it's very fast, and we need speed to detect, classify and take action for every single frame. I used a pre-trained classifier from here and it worked very well for my stop sign.
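A minimal sketch of that kind of detection with OpenCV is shown below; the cascade file name is a placeholder for whichever pre-trained XML you use.

```python
import cv2

# Placeholder path -- point this at your pre-trained Haar cascade XML file.
stop_cascade = cv2.CascadeClassifier("stop_sign_classifier.xml")

def detect_stop_sign(frame_bgr):
    """Return bounding boxes (x, y, w, h) for any stop signs in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return stop_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```

`detectMultiScale` returns an empty sequence when nothing is found, so `len(detect_stop_sign(frame)) > 0` is enough to trigger the stop.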
