
The GHOST CAR - An Autonomous Car Project

Academic Final Year Engineering Project

  • The aim of this project was to develop an autonomous car prototype with automatic steering control, traffic sign recognition, traffic light detection and other object detection features.
  • The project runs on a model car built around a Raspberry Pi 4B+, assisted by one to three external computers depending on their GPU memory capacity. The model car collects input from a camera module and an ultrasonic sensor and sends the data to an external computer over IP. The computer processes this input for movement control, object detection (traffic signs and traffic lights) and collision avoidance.
  • These features are achieved using technologies such as machine learning algorithms, artificial neural networks, sensor fusion and computer vision.

THE GHOST CAR - PROTOTYPE

[Ghost Car prototype images]

About the Project Files Structure

~/Driving Prediction/

  The following programs are run on an external system with good GPU compute capacity.

    angle.txt: File buffer for TCP data transfer.

    modelall.h5: Trained model file to control the Steering for the Prototype Car.

    modelleft.h5: Trained model file to control the Steering for the Prototype Car - only left turns.

    modelright.h5: Trained model file to control the Steering for the Prototype Car - only right turns.

    run.bat: Batch File to manage multiple windows.

    uploadTCP.py: Establishes TCP connection to the prototype car.

    videoPredict.py: Program that predicts the steering angle using a trained Convolutional Neural Network.
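
  The steering prediction pipeline can be summarised with a minimal sketch like the one below. It assumes an NVIDIA-style 66x200 YUV input and writes the result into angle.txt for uploadTCP.py to serve; the actual preprocessing in videoPredict.py may differ.

    # Hedged sketch: predict a steering angle per frame with a trained Keras model.
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    model = load_model("modelall.h5")
    cap = cv2.VideoCapture("test.mp4")          # or the IP camera stream URL

    def preprocess(frame):
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)
        img = cv2.resize(img, (200, 66))        # width x height expected by the CNN (assumption)
        return img.astype(np.float32) / 255.0

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        angle = float(model.predict(preprocess(frame)[None, ...], verbose=0)[0][0])
        with open("angle.txt", "w") as f:       # file buffer read by uploadTCP.py
            f.write(f"{angle:.3f}")
    cap.release()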

~/Lane Detection/

    lanesImage.py: Detects Lanes on Road - for Image Input.

    lanesTuner.py: Helps in tuning lanesImage.py.

    lanesVideo.py: Detects Lanes on Road - for Video Input.

    test_images.jpg: Sample images to test lanesImage.py.

    test_video.mp4: Sample videos to test lanesVideo.py.
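
  The scripts above follow the classic Canny-edges-plus-Hough-lines approach. A minimal sketch of that pipeline is shown below; the thresholds and region-of-interest polygon are rough guesses of the kind lanesTuner.py is meant to help refine.

    # Hedged sketch: lane detection with Canny edges + probabilistic Hough lines.
    import cv2
    import numpy as np

    img = cv2.imread("test_images.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)

    # Keep only a trapezoidal region of interest in front of the car
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                     (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    edges = cv2.bitwise_and(edges, mask)

    # Fit line segments and draw them back on the frame
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 3)
    cv2.imwrite("lanes_out.jpg", img)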

~/Object Detection/

    coco.names: Class label names from COCO, a large-scale object detection, segmentation and captioning dataset.
        See the COCO dataset website for more information.

    OD.py: Object Detection Program - for Video Input.

    ODimage.py: Object Detection Program - for Image Input.

    test.mp4: Sample Video for Testing Object Detection.

    yolov3(380).cfg: YOLOv3 config file.

    yolov3(608).cfg: YOLOv3 config file - standard.

    yolov3-tiny.cfg: YOLOv3-tiny config file - lightweight.

    yolov3.weights: YOLOv3 neural network weights file - standard.

    yolov3-tiny.weights: YOLOv3-tiny neural network weights file - lightweight.
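
  These config, weight and class-name files are used together for YOLOv3 inference. A minimal sketch with OpenCV's DNN module is shown below; the input image name and the 0.5 confidence threshold are placeholders.

    # Hedged sketch: YOLOv3 inference with OpenCV's DNN module.
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("yolov3(608).cfg", "yolov3.weights")
    classes = open("coco.names").read().strip().split("\n")
    out_layers = net.getUnconnectedOutLayersNames()

    frame = cv2.imread("sample.jpg")            # or frames read from test.mp4
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (608, 608), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(out_layers)

    h, w = frame.shape[:2]
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = scores[class_id]
            if conf > 0.5:                      # confidence threshold (assumption)
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                x, y = int(cx - bw / 2), int(cy - bh / 2)
                cv2.rectangle(frame, (x, y), (x + int(bw), y + int(bh)), (0, 0, 255), 2)
                cv2.putText(frame, classes[class_id], (x, y - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    cv2.imwrite("detections.jpg", frame)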

~/RPi Programs/

  The following programs are run on the Raspberry Pi mounted on the car prototype.

    GhostCarDrive.py: Program that controls the prototype car's components for autonomous driving.

    IPStreaming.py: Program that streams car's camera input to external computer over IP.

    SampleControlServoMotor.py: Sample program to test Servo Motors manually.

    SampleTestMotor.py: Sample program to test DC motor through L298N driver.
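
  A minimal sketch of manual actuation with RPi.GPIO, in the spirit of SampleControlServoMotor.py and SampleTestMotor.py, is shown below. The pin numbers are placeholders; wire the components according to the project's Pin Configuration table.

    # Hedged sketch: drive a steering servo and a DC motor (via an L298N) with RPi.GPIO.
    import time
    import RPi.GPIO as GPIO

    SERVO_PIN, IN1, IN2, ENA = 18, 23, 24, 25   # example BCM pins (assumption)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup([SERVO_PIN, IN1, IN2, ENA], GPIO.OUT)

    servo = GPIO.PWM(SERVO_PIN, 50)             # 50 Hz for a hobby servo
    motor = GPIO.PWM(ENA, 1000)                 # speed control on the L298N enable pin
    servo.start(7.5)                            # roughly centre position
    motor.start(0)

    try:
        GPIO.output(IN1, GPIO.HIGH)             # forward direction
        GPIO.output(IN2, GPIO.LOW)
        motor.ChangeDutyCycle(40)               # 40% speed
        for duty in (5.0, 7.5, 10.0):           # sweep left, centre, right
            servo.ChangeDutyCycle(duty)
            time.sleep(1)
    finally:
        servo.stop()
        motor.stop()
        GPIO.cleanup()                          # always release pins on exit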

~/Sample TCP Connection/

    client.py: Sample TCP Client Program - Runs on RPi to establish connection with External GPU.

    server.py: Sample TCP Server Program - Runs on laptop to send back data to RPi.
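
  A minimal sketch of this request/response pattern is shown below. The host, port and server IP are placeholders; the server replies with the latest value written to angle.txt.

    # Hedged sketch: the laptop runs a TCP server that replies with the latest
    # steering angle, and the RPi polls it as a client.
    import socket

    HOST, PORT = "0.0.0.0", 8000

    # --- server.py (laptop) ---
    def serve():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                while True:
                    if not conn.recv(16):        # wait for a request from the RPi
                        break
                    with open("angle.txt") as f: # latest prediction from videoPredict.py
                        conn.sendall(f.read().encode())

    # --- client.py (Raspberry Pi) ---
    def request_angle(server_ip="192.168.0.10"):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((server_ip, PORT))
            cli.sendall(b"angle?")
            return float(cli.recv(32).decode())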

~/Simulation Testing Driver/

    drive relay.py: Program that connects with UDACITY Autonomous Driving Simulator.

    drive.py: Program to test trained models on the simulator.

    model speed.h5: Trained Model to control the Car's Speed.

    model steering.h5: Trained Model to control the Car's Steering.

    model throttle.h5: Trained Model to control the Car's Throttle.

~/Traffic Light Detection/

    TrafficLight.py: Program that recognises Traffic Lights.

    Sample_Video.mp4: Sample video to test Traffic light Recognition.

    Sample_Output.mp4: Sample video output for the Traffic light Recognition.
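
  A simple way to recognise the light state is colour thresholding in HSV space; a minimal sketch of that idea is shown below. TrafficLight.py may use a different method, and the threshold ranges and pixel-count cutoff here are rough guesses.

    # Hedged sketch: classify the traffic light state by counting coloured pixels in HSV space.
    import cv2

    def light_state(frame):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        masks = {
            "red":    cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)),
            "yellow": cv2.inRange(hsv, (20, 120, 120), (35, 255, 255)),
            "green":  cv2.inRange(hsv, (45, 100, 100), (90, 255, 255)),
        }
        # Pick the colour with the most lit pixels, if any clearly dominates
        counts = {c: cv2.countNonZero(m) for c, m in masks.items()}
        best = max(counts, key=counts.get)
        return best if counts[best] > 200 else "unknown"

    cap = cv2.VideoCapture("Sample_Video.mp4")
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        print(light_state(frame))
    cap.release()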

~/Traffic Sign Detection/

    Traffic Signs Detection.py: Program to predict traffic signs - for Video Input.

    trafficmodel.h5: CNN model for traffic sign prediction.
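
  A minimal sketch of classifying a frame with the trained CNN is shown below. The 32x32 grayscale preprocessing mirrors a common GTSRB-style pipeline and is an assumption; match it to how trafficmodel.h5 was actually trained.

    # Hedged sketch: traffic sign classification with the trained Keras CNN.
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    model = load_model("trafficmodel.h5")

    def predict_sign(frame):
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        img = cv2.equalizeHist(cv2.resize(img, (32, 32)))     # preprocessing is an assumption
        img = img.astype(np.float32) / 255.0
        probs = model.predict(img.reshape(1, 32, 32, 1), verbose=0)[0]
        return int(np.argmax(probs)), float(np.max(probs))    # class index, confidence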

~/Udacity Simulator/

    Udacity Simulator - Windows64 - Installer.zip: Windows installer for the open-source simulation software
        developed by UDACITY using the Unity engine for autonomous car simulation.
        For the original project, refer to the following link: https://github.com/udacity/self-driving-car-sim

~/Video Recording/

    videoCapture.py: Program to record video from a camera, an MP4 file or an MJPEG stream.

    videoFlipper.py: Program that flips the video horizontally or vertically to simulate different camera positions.

    videoFramer.py: Program that saves each frame of a video input to a separate folder, along with a config
        file listing all the frame file names; typically used for dataset generation.
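
  A minimal sketch of dataset generation in the style of videoFramer.py is shown below; the input file name and output folder are placeholders.

    # Hedged sketch: dump every frame of a recording to its own file and log the
    # file names for later labelling.
    import os
    import cv2

    def video_to_frames(video_path, out_dir="frames"):
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        names = []
        i = 0
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            name = f"frame_{i:05d}.jpg"
            cv2.imwrite(os.path.join(out_dir, name), frame)
            names.append(name)
            i += 1
        cap.release()
        with open(os.path.join(out_dir, "frames.txt"), "w") as f:   # config/index file
            f.write("\n".join(names))

    video_to_frames("drive_recording.mp4")   # hypothetical input file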

Setting up the environment

On Raspberry Pi

The hardware used in this project on the prototype was a Raspberry Pi 4 B+ 4GB RAM Model.

  1. Install Python 3.6.0+
  2. Copy the 'RPi Programs' folder onto the Raspberry Pi.
  3. Run the following command in your terminal: pip3 install -r Requirements.txt

On External System

The external hardware used in this project consists of two laptops, one running steering control and the other object detection.

  1. Install Python 3.6.0+
  2. Copy the entire project file structure.
  3. Run the following command in your terminal: pip install -r Requirements.txt

How to drive

  1. Testing: Run the sample programs under each sub-folder to test the functioning of all the features.

  2. Training (optional): Run the training programs with the relevant dataset to customize the neural network models as required.

  3. Actuation: Test all the hardware components and use the Pin Configuration table to connect all the components on the model prototype.

  4. Camera Input over IP Stream: Run python3 IPStreaming.py on the RPi 4 and check for output in a web browser at http://[YourRPiIP]:[PortNumber]/index.html NOTE: Make sure to use a dedicated Raspberry Pi Camera Module.

  5. Steering Control Prediction: Run python videoPredict.py on the external system to start steering control prediction using the CNN.

  6. Establish TCP Communication: Create a TCP server connection to send data back to the prototype model. Run python uploadTCP.py to start a server that sends data upon request from the RPi.

  7. Self-driving of prototype: Run python3 GhostCarDrive.py on the RPi 4 and wait for the program to perform the GPIO pin setup and establish a connection with the TCP server. Once these steps complete, the car will start self-driving (a rough sketch of this drive loop is shown below).
    NOTE: To terminate the program, press Ctrl + C only once! Pressing it multiple times will forcefully terminate the program, causing the GPIO pins to misbehave due to improper cleanup.
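
A rough sketch of how such a drive loop could tie the pieces together is shown below. The pin number, the angle-to-duty-cycle mapping and the server address are all placeholders, not the exact logic of GhostCarDrive.py.

    # Hedged sketch: poll the TCP server for the predicted steering angle and
    # map it onto the steering servo.
    import time
    import socket
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(18, GPIO.OUT)                     # servo pin is a placeholder
    servo = GPIO.PWM(18, 50)
    servo.start(7.5)

    def get_angle(server_ip="192.168.0.10", port=8000):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((server_ip, port))
            s.sendall(b"angle?")
            return float(s.recv(32).decode())

    try:
        while True:
            angle = get_angle()                  # e.g. -25..+25 degrees (assumption)
            duty = 7.5 + (angle / 25.0) * 2.5    # map to roughly 5.0-10.0 % duty cycle
            servo.ChangeDutyCycle(max(5.0, min(10.0, duty)))
            time.sleep(0.05)
    except KeyboardInterrupt:                    # a single Ctrl+C triggers clean shutdown
        pass
    finally:
        servo.stop()
        GPIO.cleanup()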

PROTOTYPE MODEL

IMAGES

MODEL TRACK

[Images of the model track]

MODEL CAR

[Images of the model car]

VIDEOS

[Videos of the Ghost Car prototype in action]