
Driver-Safety-Interface

Need

India is the number one contributor to global road-crash mortality and morbidity figures. Every hour, around 16 lives are lost to road crashes in India. In the last decade alone, India lost 1.3 million people to road crashes and another 5.3 million were disabled for life, and yet drivers fail to give up risky habits. This project can be extended by completing the GUI component and integrating it with user inputs, and by combining the model with IoT hardware into an end product in which:

• While driving, the driver's behaviour is continuously monitored through 2-D images captured by a camera placed on the dashboard (e.g. a Raspberry Pi camera), and the driver is immediately notified through alert tones if he/she is found to be distracted.

• Such an in-vehicle device can help anyone with a personal and/or financial interest in improving the driving safety of friends, family, employees, etc. If a database of coordinates and times of distraction is maintained, it can help at two levels. Individual level: the in-car device combined with web-based analytics can give accurate feedback on a loved one's driving safety. Business level: many businesses own one or more company vehicles that are driven by employees but remain a company liability; this device and the web analytics can give the peace of mind that these assets are in good hands and being driven responsibly. If distracted driving is discovered, the photo evidence can be reviewed and intervention taken as needed.

Note: This project develops the software component, which can later be extended to deployment on hardware.

Technology

• Python

• Image processing

• Deep learning

Dependencies

• Python 3.6.1

• TensorFlow 1.3.0

• Keras 2.1.2

• matplotlib 2.0.2

• numpy 1.12.1

• PyQt5 and Qt Designer (optional: needed to run the partially completed interface, which takes an input image and displays it)

Dataset

The dataset is taken from State Farm's Distracted Driver Detection competition on Kaggle. It has 10 classes describing the driver's state, covering usage of a mobile phone, drinking, talking to co-passengers, self-grooming (hair and make-up), reaching behind, and operating the radio, alongside safe driving.
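
For reference, the competition labels these classes c0 through c9; a minimal Python mapping of the label names (as listed on the Kaggle competition page) is:

# Class labels from State Farm's Distracted Driver Detection competition on Kaggle.
CLASS_NAMES = {
    "c0": "safe driving",
    "c1": "texting - right",
    "c2": "talking on the phone - right",
    "c3": "texting - left",
    "c4": "talking on the phone - left",
    "c5": "operating the radio",
    "c6": "drinking",
    "c7": "reaching behind",
    "c8": "hair and makeup",
    "c9": "talking to passenger",
}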

Implementation Details

Our model is trained on images from a Kaggle competition sponsored by State Farm Insurance, which has 10 distinct classes. Each input image is resized to 224 x 224 pixels. Image-processing techniques such as histogram equalization across all 3 channels and random rotation are applied, which also serve as data augmentation. Because of the limited dataset and computing resources, we use transfer learning: a pre-trained VGG-16 CNN (chosen over VGG-19, GoogLeNet and ResNet-50 because it produced the best results) is further trained to learn the idiosyncrasies of our data. Global Average Pooling is applied just before the final output layer, which helps the convolutional neural network retain localization ability despite being trained only on whole images. The accuracy of the model on a subset of the test data was found to be between 50% and 60%, as the model finds it difficult to differentiate between the safe driving, talking to passengers, and hair and make-up classes. The model was trained on Google Colab, which provides a Tesla K80 GPU.
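
The model construction described above can be sketched in Keras roughly as follows; this is a minimal illustration assuming the TensorFlow backend, and the exact layers, hyperparameters and file handling in train.py may differ:

from keras.applications.vgg16 import VGG16
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

# Load the VGG-16 convolutional base pre-trained on ImageNet, without the fully connected top.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the pre-trained convolutional layers; only the new head is trained at first.
for layer in base.layers:
    layer.trainable = False

# Global Average Pooling just before the output layer gives the network localization ability.
x = GlobalAveragePooling2D()(base.output)
outputs = Dense(10, activation="softmax")(x)  # 10 distraction classes

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

Training would then proceed with model.fit (or fit_generator for augmented batches) on the split dataset.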

Running the pretrained model

One model has been trained using the implementation details mentioned above and can be found in the Model folder. Images can be classified into the different classes using the command:

python predict.py --image path-to-image
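
Conceptually, prediction loads the saved model, preprocesses the image to the 224 x 224 input size and reports the most probable classes. The sketch below illustrates this idea; the model file name and test image path are hypothetical, and the actual logic in predict.py may differ:

import numpy as np
from keras.models import load_model
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input

# Hypothetical model path; the repository stores the trained model in the Model folder.
model = load_model("Model/distracted_driver_vgg16.h5")

def predict_top5(img_path):
    # Resize to the 224 x 224 input expected by VGG-16 and apply the standard preprocessing.
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    probs = model.predict(x)[0]
    # Return the 5 most probable class indices with their probabilities.
    top5 = probs.argsort()[-5:][::-1]
    return [(int(i), float(probs[i])) for i in top5]

print(predict_top5("Test-images/example.jpg"))  # hypothetical test image path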

Running the model on your own dataset

There are three steps to run the model:

• First, download the training data from Kaggle and split it into training, test and validation sets in a 60/20/20 proportion using split.py (a minimal sketch of such a split follows this list).

• To train the model on your dataset, use train.py and specify the file paths at the respective positions.

• Update predict.py to point to the newly trained model and use it to obtain the 5 most probable classes; the trained model can be found in the Model folder.
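
As a rough illustration of the first step, the sketch below performs a 60/20/20 split into train, validation and test folders; the folder names are illustrative and the repository's split script may organize files differently:

import os, random, shutil

def split_dataset(src_dir, dst_dir, ratios=(0.6, 0.2, 0.2), seed=42):
    # Copy each class folder from src_dir into train/valid/test subfolders of dst_dir.
    random.seed(seed)
    for cls in sorted(os.listdir(src_dir)):
        files = sorted(os.listdir(os.path.join(src_dir, cls)))
        random.shuffle(files)
        n_train = int(ratios[0] * len(files))
        n_valid = int(ratios[1] * len(files))
        splits = {
            "train": files[:n_train],
            "valid": files[n_train:n_train + n_valid],
            "test": files[n_train + n_valid:],
        }
        for split, names in splits.items():
            out = os.path.join(dst_dir, split, cls)
            os.makedirs(out, exist_ok=True)
            for name in names:
                shutil.copy(os.path.join(src_dir, cls, name), out)

# Example: split the Kaggle training images (organized as one folder per class c0..c9).
split_dataset("imgs/train", "dataset/split_data")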

Run the GUI with the command python GUI_ADDD.py and click the Browse button to display an image. The integration of the UI with the model is yet to be completed.
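
For reference, the browse-and-display behaviour can be sketched in PyQt5 roughly as follows; the widget names (browseButton, imageLabel) and the top-level widget class are assumptions and must be adjusted to match GUI_ADDD.ui:

import sys
from PyQt5 import QtWidgets, QtGui, uic

class DriverSafetyUI(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        # Load the interface designed in Qt Designer.
        uic.loadUi("GUI_ADDD.ui", self)
        # Widget names below are assumptions; adjust them to match the .ui file.
        self.browseButton.clicked.connect(self.browse_image)

    def browse_image(self):
        # Let the user pick an image and show it; model integration would go here later.
        path, _ = QtWidgets.QFileDialog.getOpenFileName(
            self, "Select an image", "", "Images (*.png *.jpg *.jpeg)")
        if path:
            self.imageLabel.setPixmap(QtGui.QPixmap(path))

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = DriverSafetyUI()
    window.show()
    sys.exit(app.exec_())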
