Note: The model is trained on highway driving, but still performs reasonably well in the city.
The goal of this project is to build a self-driving car with deep learning and computer vision that can navigate in different environments. The project is inspired by the work of Sentdex. After experimenting with several convolutional neural networks, NVIDIA's PilotNet was chosen for its faster prediction rate. YOLOv3, one of the most popular object detection algorithms, is also integrated with PilotNet to adjust steering, throttle, and brake control according to traffic density.
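As a reference, here is a minimal Keras sketch of the PilotNet architecture following the layer sizes in NVIDIA's paper (five convolutional layers followed by four fully connected layers, ending in a single steering output). The 66x200x3 input shape and ELU activations follow the paper; the in-model normalization layer is an assumption about how this project preprocesses pixels.

```python
from tensorflow.keras import layers, models

def build_pilotnet(input_shape=(66, 200, 3)):
    """PilotNet-style network: 5 conv layers + 4 dense layers,
    regressing a single steering value."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        # Scale raw pixels from [0, 255] to [-1, 1] inside the model
        layers.Rescaling(1.0 / 127.5, offset=-1.0),
        layers.Conv2D(24, 5, strides=2, activation='elu'),
        layers.Conv2D(36, 5, strides=2, activation='elu'),
        layers.Conv2D(48, 5, strides=2, activation='elu'),
        layers.Conv2D(64, 3, activation='elu'),
        layers.Conv2D(64, 3, activation='elu'),
        layers.Flatten(),
        layers.Dense(100, activation='elu'),
        layers.Dense(50, activation='elu'),
        layers.Dense(10, activation='elu'),
        layers.Dense(1),  # predicted steering angle
    ])
```

The small input resolution is one reason PilotNet predicts quickly enough for real-time control.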
- Grand Theft Auto 5 (with the hood camera turned on)
- Python 3.6
- TensorFlow
- Keras
- OpenCV
- NumPy
- Pygame
- Xbox 360 Emulator (https://www.x360ce.com/)
- vJoy (http://vjoystick.sourceforge.net/site/)
100,000 images with corresponding steering angle and throttle were collected by driving the car on the highway. However, only 39,046 images remained after balancing the data, so the dataset was artificially expanded by flipping each image horizontally and multiplying its steering angle by -1.
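The flip augmentation above can be sketched in a few lines of NumPy. The function name and the batch layout (N, height, width, channels) are assumptions, not the project's actual code:

```python
import numpy as np

def augment_flip(images, steering):
    """Double the dataset: mirror each frame left-right and negate
    its steering angle. A mirrored road scene with an inverted
    steering label is still a valid training example."""
    flipped = images[:, :, ::-1, :]  # reverse the width axis
    return (np.concatenate([images, flipped]),
            np.concatenate([steering, -steering]))
```

This recovers some of the examples lost to balancing and also removes any left/right turn bias in the data.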
The original dataset of 100,000 images used to train this model can be found here
Use collect_data.py to generate your own dataset. Ensure the GTA-5 window size matches the one set in collect_data.py. Upload the collected data to Google Drive to train your model on Google Colab.
Training code can be found here
- Upload the training data on Google Drive
- Create a Google Colab project, ensure GPU is enabled
- Upload the .ipynb file under training_colab to Google Colab
- Run the code
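The Colab notebook's core training step is presumably a standard Keras regression fit: mean squared error on the steering angle with the Adam optimizer. A minimal sketch with a deliberately tiny stand-in model and random stand-in arrays (in place of the data loaded from Google Drive) looks like this:

```python
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical stand-ins for the images/labels loaded from Drive.
X = np.random.rand(32, 66, 200, 3).astype('float32')
y = np.random.uniform(-1, 1, size=(32, 1)).astype('float32')

# Tiny placeholder network; the real project trains PilotNet.
model = models.Sequential([
    layers.Input(shape=(66, 200, 3)),
    layers.Conv2D(8, 5, strides=4, activation='elu'),
    layers.Flatten(),
    layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')  # steering regression
history = model.fit(X, y, validation_split=0.25,
                    epochs=1, batch_size=8, verbose=0)
```

With GPU enabled in Colab (Runtime > Change runtime type), the same `fit` call runs on the GPU without code changes.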
The trained model and YOLOv3 weights can be found here
- Download the Xbox 360 Emulator and add it to the directory where GTA is installed
- Install vJoy
- Configure vJoy as the controller in the Xbox 360 Emulator
- Download or clone this repo
- Run test_model_steer.py
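Inside test_model_steer.py, the model's steering prediction has to be converted into a virtual joystick axis value for vJoy. A minimal sketch of that mapping is below; the [-1, 1] prediction range and the 0..32767 axis range are assumptions to verify against your vJoy configuration:

```python
def steering_to_axis(angle, axis_max=32767):
    """Map a predicted steering angle in [-1, 1] to a vJoy-style
    axis value in [0, axis_max]. -1 is full left, +1 full right."""
    angle = max(-1.0, min(1.0, angle))  # clamp the model output
    return int(round((angle + 1.0) / 2.0 * axis_max))
```

Clamping first keeps an out-of-range prediction from wrapping into an invalid axis value.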
- Collect diverse dataset
- Increase resolution of the images in the dataset
- Use CNN+LSTM to train the model
Sentdex: https://www.youtube.com/playlist?list=PLQVvvaa0QuDeETZEOy4VdocT7TOjfSA8a