See self-driving in action
Video demo

Project blog

Project blog (Chinese)

This project builds a self-driving RC car using a Raspberry Pi, an Arduino, and open source software. The Raspberry Pi collects inputs from a camera module and an ultrasonic sensor, and sends the data to a computer wirelessly. The computer processes the input images and sensor data for object detection (stop sign and traffic light) and collision avoidance, respectively. A neural network model runs on the computer and makes steering predictions based on the input images. The predictions are then sent to the Arduino for RC car control.
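The last step above — turning a network prediction into a control signal — can be sketched as a small mapping. The class indices, command characters, and the three-way steering scheme below are illustrative assumptions, not the project's actual protocol:

```python
# Hypothetical sketch: map a neural network's steering output to a
# single-character command for the Arduino. The class order and the
# command characters are made-up placeholders for illustration.
import numpy as np

# assumed class order: 0 = left, 1 = right, 2 = forward
COMMANDS = {0: "L", 1: "R", 2: "F"}

def steering_command(prediction):
    """Pick the command for the highest-scoring output unit."""
    return COMMANDS[int(np.argmax(prediction))]

# example: network output scores for one frame
print(steering_command([0.1, 0.2, 0.7]))  # prints "F"
```

In practice the chosen character would be written to the Arduino over a serial link, and the Arduino sketch would translate it into pin signals for the RC transmitter.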
- Install `miniconda` on your computer
- Create the `auto-rccar` environment with all the necessary libraries for this project:
  `conda env create -f environment.yml`
- Activate the `auto-rccar` environment:
  `source activate auto-rccar`

To exit, simply close the terminal window. For more information about managing Anaconda environments, please see here.
test/
    rc_control_test.py: RC car control with keyboard
    stream_server_test.py: video streaming from Pi to computer
    ultrasonic_server_test.py: sensor data streaming from Pi to computer
    model_train_test/
        data_test.npz: sample data
        train_predict_test.ipynb: a Jupyter notebook that walks through the neural network model in OpenCV 3
raspberryPi/
    stream_client.py: stream video frames in JPEG format to the host computer
    ultrasonic_client.py: send distance data measured by the sensor to the host computer
arduino/
    rc_keyboard_control.ino: control the RC car controller
computer/
    cascade_xml/: trained cascade classifiers
    chess_board/: images for calibration, captured by the Pi camera
    picam_calibration.py: Pi camera calibration
    collect_training_data.py: collect images in grayscale; data saved as *.npz
    model.py: neural network model
    model_training.py: model training and validation
    rc_driver_helper.py: helper classes/functions for rc_driver.py
    rc_driver.py: receive data from the Raspberry Pi and drive the RC car based on model predictions
Traffic_signal/
    traffic signal sketch contributed by @geek111
- Testing: Flash `rc_keyboard_control.ino` to the Arduino and run `rc_control_test.py` to drive the RC car with the keyboard. Run `stream_server_test.py` on the computer and then run `stream_client.py` on the Raspberry Pi to test video streaming. Similarly, `ultrasonic_server_test.py` and `ultrasonic_client.py` can be used to test sensor data streaming.
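Streaming video over a socket requires some way to delimit frames in the byte stream. A common approach, shown here as a minimal sketch, is to prefix each JPEG with its length; the project's actual wire format may differ:

```python
# Minimal sketch of length-prefixed framing for JPEG images over TCP:
# each frame is sent as a 4-byte big-endian length followed by the bytes.
# This illustrates the idea only, not the project's actual protocol.
import struct

def pack_frame(jpeg_bytes):
    """Prefix a frame with its length so the receiver can split the stream."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def unpack_frames(buffer):
    """Extract all complete frames from a byte buffer; return leftover bytes."""
    frames = []
    while len(buffer) >= 4:
        (length,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < 4 + length:
            break  # partial frame: wait for more data
        frames.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return frames, buffer

data = pack_frame(b"jpeg-1") + pack_frame(b"jpeg-2")
frames, leftover = unpack_frames(data)
print(len(frames), leftover)  # prints: 2 b''
```

The receiver can call `unpack_frames` on its accumulated socket buffer after every `recv`, decoding whichever frames are complete and keeping the remainder for the next read.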
- Pi camera calibration (optional): Take multiple chessboard images with the Pi camera module at various angles and put them into the `chess_board` folder, then run `picam_calibration.py`; the returned parameters from the camera matrix will be used in `rc_driver.py`.
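To see what the calibration output is for: the camera (intrinsic) matrix maps 3-D points in camera coordinates to pixel coordinates. The values below are made-up placeholders, not real calibration results:

```python
# Sketch of using a camera matrix. Focal lengths (fx, fy) and principal
# point (cx, cy) below are illustrative, not calibrated values.
import numpy as np

K = np.array([[320.0,   0.0, 160.0],
              [  0.0, 320.0, 120.0],
              [  0.0,   0.0,   1.0]])

def project(point_3d):
    """Project a 3-D point (camera coordinates) to pixel coordinates."""
    p = K @ np.asarray(point_3d, dtype=float)
    return p[:2] / p[2]  # divide by depth

# a point straight ahead lands on the principal point
print(project([0.0, 0.0, 1.0]))  # prints [160. 120.]
```

In this project, parameters like these are used for distance estimation to detected objects, which is why calibration feeds into `rc_driver.py`.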
- Collect training/validation data: First run `collect_training_data.py`, then run `stream_client.py` on the Raspberry Pi. Press the arrow keys to drive the RC car and press `q` to exit. Frames are saved only when a key is pressed. On exit, the data is saved into a newly created `training_data` folder.
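The kind of dataset this step produces can be sketched as flattened grayscale frames paired with one-hot steering labels, saved as `.npz`. The array names, image size, and label layout below are assumptions for illustration, not the script's actual format:

```python
# Sketch of saving paired frames and steering labels as a .npz file.
# Shapes and array names are illustrative assumptions.
import numpy as np, tempfile, os

frames = np.zeros((10, 120 * 320), dtype=np.float32)  # flattened grayscale images
labels = np.zeros((10, 3), dtype=np.float32)          # one-hot: left/right/forward
labels[:, 2] = 1.0                                    # pretend every frame was "forward"

path = os.path.join(tempfile.mkdtemp(), "data.npz")
np.savez(path, train=frames, train_labels=labels)

data = np.load(path)
print(data["train"].shape, data["train_labels"].shape)  # prints: (10, 38400) (10, 3)
```

Saving frames only on key presses, as the step above describes, keeps every stored image paired with a definite steering label.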
- Neural network training: Run `model_training.py` to train a neural network model. Feel free to tune the model architecture and parameters to achieve better results. After training, the model is saved into a newly created `saved_model` folder.
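The training loop behind this step can be sketched with a toy one-hidden-layer network trained by plain gradient descent on synthetic data. This only shows the shape of the loop; the real `model.py`/`model_training.py` define the project's own architecture and use the collected image data:

```python
# Toy MLP training sketch on synthetic data; sizes and learning rate
# are arbitrary illustration values, not the project's settings.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))                     # stand-in for image features
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)   # stand-in steering label

W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                     # hidden layer
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

losses = []
for _ in range(200):
    h, p = forward(X)
    losses.append(float(np.mean((p - y) ** 2)))  # mean squared error
    grad_out = 2 * (p - y) * p * (1 - p) / len(X)    # d(MSE)/d(logit)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)        # backprop through tanh
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(axis=0)

print(losses[0], losses[-1])  # the loss should decrease over training
```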
- Cascade classifier training (optional): Trained stop sign and traffic light classifiers are included in the `cascade_xml` folder. If you are interested in training your own classifiers, please refer to the OpenCV docs and this great tutorial.
- Self-driving in action: First run `rc_driver.py` to start the server on the computer, then run `stream_client.py` and `ultrasonic_client.py` on the Raspberry Pi.
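The driver's per-frame decision, as described in the overview, combines collision avoidance with the model's steering output: stop if the ultrasonic sensor reports an obstacle too close, otherwise follow the prediction. The threshold and command names below are illustrative assumptions, not values from `rc_driver.py`:

```python
# Sketch of the decision rule combining sensor data and model output.
# Threshold and commands are made-up placeholders.
STOP_DISTANCE_CM = 25.0  # assumed safety threshold

def decide(distance_cm, predicted_command):
    """Override the model's command when an obstacle is too close."""
    if distance_cm is not None and distance_cm < STOP_DISTANCE_CM:
        return "STOP"
    return predicted_command

print(decide(12.0, "F"))  # prints "STOP" (obstacle ahead)
print(decide(80.0, "F"))  # prints "F" (clear road, follow the model)
```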
Chinese documentation (thanks to zhaoying9105)