We placed 1st overall and 2nd in the Honda challenge. Huge thank you to the HackOhio team for all their dedicated work putting on the event, and to Honda for being a dedicated sponsor.
This README has the following sections: Overview, How to use files, Details of Implementation.
We used computer vision to determine whether the driver of a car was distracted. We mounted a webcam in front of the driver to detect whether they were engaged or distracted; if the driver was distracted, we would light up red LEDs, sound a buzzer, and finally trigger a vibration to alert them.
This file contains the code uploaded to the Arduino. It lets us control the LEDs, the buzzer, and the vibration motor.
This is an overview of the project and contains 2 videos showcasing our work.
This is our main script. It runs inference on video from the webcam. If the driver is distracted, it sends a signal to the Arduino that activates alarms based on how long the driver has been distracted.
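As a rough illustration of how such a loop fits together (the model path, serial port, time thresholds, and single-byte command protocol below are placeholder assumptions, not the exact values from this repo):

```python
# Minimal sketch of the main inference loop. Model path, class names,
# serial port, thresholds, and command bytes are illustrative assumptions.
import time

import cv2
import serial
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1)

cap = cv2.VideoCapture(0)          # webcam mounted in front of the driver
distracted_since = None            # timestamp when distraction was first seen

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    results = model(frame[:, :, ::-1])   # OpenCV is BGR; yolov5 expects RGB
    labels = results.pandas().xyxy[0]['name'].tolist()

    if 'distracted' in labels:
        if distracted_since is None:
            distracted_since = time.time()
        elapsed = time.time() - distracted_since
        # Escalate the alarms the longer the driver stays distracted:
        # LEDs first, then the buzzer, then the vibration motor.
        if elapsed > 4:
            arduino.write(b'3')    # LEDs + buzzer + vibration
        elif elapsed > 2:
            arduino.write(b'2')    # LEDs + buzzer
        else:
            arduino.write(b'1')    # LEDs only
    else:
        distracted_since = None
        arduino.write(b'0')        # all alarms off
```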
This is a toy example used to verify that the methods used in process_video_into_training_data.py work correctly.
Here we use a YOLOv5 model to automatically generate bounding boxes and labels for our drivers.
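A minimal sketch of the idea, assuming a COCO-pretrained yolov5s supplies the driver's bounding box while the class label comes from which recording the frame belongs to; the function name and file layout are illustrative:

```python
# Sketch of auto-annotation: a COCO-pretrained yolov5s finds the driver's
# bounding box, and the class comes from which recording the frame is from
# (each video is all-engaged or all-distracted). Names are illustrative.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # pretrained on COCO

def annotate_frame(frame, class_id, label_path):
    """Write a YOLO-format label for the driver, or return False if no
    person is detected (the frame gets thrown out)."""
    results = model(frame[:, :, ::-1])        # BGR -> RGB for yolov5
    people = results.xyxyn[0]                 # normalized xyxy + conf + class
    people = people[people[:, 5] == 0]        # COCO class 0 == 'person'
    if len(people) == 0:
        return False

    # Keep the highest-confidence person box.
    x1, y1, x2, y2 = people[people[:, 4].argmax(), :4].tolist()
    xc, yc = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = x2 - x1, y2 - y1
    with open(label_path, 'w') as f:
        f.write(f'{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}\n')
    return True
```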
YAML file for training our model.
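For reference, a YOLOv5-style data.yaml looks like the following; the paths and the two class names here are assumptions on our part, not copied from the repo:

```yaml
# Illustrative data.yaml in YOLOv5's format; paths and class names assumed.
train: ../train/images
val: ../valid/images

nc: 2
names: ['engaged', 'distracted']
```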
The model we trained and used for inference.
We sat in a parking lot and generated approximately 1 hour of video. We had 4 'drivers', and each recorded roughly 7.5 minutes of engaged driving and 7.5 minutes of distracted driving. Since the driver is always present in our dataset, we can use YOLOv5 to detect them; if no driver is detected in a frame, we throw that frame out, since no bounding box can be reliably generated. We sample every 8th frame of the videos to build our dataset. Overall we generated and automatically annotated 13,037 frames.
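A sketch of that sampling step, assuming frames are written out as images alongside YOLO-format label files; the annotate_frame helper from the sketch above, along with all paths and names here, is illustrative:

```python
# Sketch of the sampling loop: keep every 8th frame of each recording and
# hand it to the auto-annotator sketched above. Paths/names are illustrative.
import cv2

def sample_video(video_path, out_dir, class_id, stride=8):
    cap = cv2.VideoCapture(video_path)
    frame_idx = kept = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % stride == 0:
            stem = f'{out_dir}/frame_{frame_idx:06d}'
            # annotate_frame returns False when no driver is detected,
            # in which case the frame is thrown out.
            if annotate_frame(frame, class_id, stem + '.txt'):
                cv2.imwrite(stem + '.jpg', frame)
                kept += 1
        frame_idx += 1
    cap.release()
    return kept
```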
We trained our model using the Colab notebook linked below. Our hyperparameters were as follows:
--img 640 --batch 16 --epochs 10 --data {dataset.location}/data.yaml --weights yolov5s.pt --cache
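Assuming the standard YOLOv5 train.py entry point used in the linked notebook, the full invocation would have looked roughly like:

```
!python train.py --img 640 --batch 16 --epochs 10 --data {dataset.location}/data.yaml --weights yolov5s.pt --cache
```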
We only trained for 10 epochs because the model started to overfit after that, and we were under time constraints to get a rough model so we could start prototyping. Training notebook: https://colab.research.google.com/github/roboflow-ai/yolov5-custom-training-tutorial/blob/main/yolov5-custom-training.ipynb#scrollTo=lXMzd43k-lpJ
Here is an image of the circuit/breadboard and Arduino that we used to alert the driver when they were distracted.