A prototype of a Google-style self-driving car, smart enough to recognize road signs and act on them!
THE PROJECT IS THE RESULT OF THE JOINT EFFORTS OF:
The project has been implemented in three ways:
- CNN model
- Wireless OpenCV model
- Wired OpenCV model
- Training and test data are generated through OpenCV: a region of interest (ROI) is created, the desired image is extracted, and the file is saved in the same mode/directory as the source file. (refer)
- We then built a two-layer convolutional neural network, followed by a fully connected layer and the output layer. Data augmentation is used while fitting the data to the model through model.fit_generator, and the trained model is saved via model_json. (refer)
- We loaded the weights from the CNN_model file and created a dictionary of the six classes: forward, stop, left, right, slow, and danger. (refer)
- An MQTT server was set up, and the detected sign was sent to the node using the paho MQTT library.
- The nodeMCU code is uploaded to the micro-controller, and the program is executed.
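As a sketch, the two-layer CNN and augmentation pipeline described above could look as follows in Keras (layer sizes, the image resolution, and file names such as data/ and cnn_model.json are assumptions, not the project's actual values):

```python
NUM_CLASSES = 6          # forward, stop, left, right, slow, danger
IMG_SIZE = (64, 64)      # assumed input resolution

def build_model():
    """Two convolution layers, one fully connected layer, then the output layer."""
    from tensorflow.keras import layers, models
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(*IMG_SIZE, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),                 # fully connected layer
        layers.Dense(NUM_CLASSES, activation="softmax"),      # output layer
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def train(data_dir="data"):
    """Fit with augmented data, then save the architecture as JSON plus weights."""
    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    # Data augmentation, fed through a generator (model.fit_generator in
    # older Keras; model.fit accepts generators in current versions).
    datagen = ImageDataGenerator(rescale=1 / 255.0, rotation_range=10,
                                 width_shift_range=0.1, height_shift_range=0.1)
    train_gen = datagen.flow_from_directory(
        data_dir, target_size=IMG_SIZE, color_mode="grayscale",
        class_mode="categorical")
    model = build_model()
    model.fit(train_gen, epochs=20)
    with open("cnn_model.json", "w") as f:   # architecture as JSON
        f.write(model.to_json())
    model.save_weights("cnn_model.h5")       # weights saved separately

# Call train() to run the whole pipeline.
```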
First, circles are detected against the background; next, three zones are defined inside each circle to identify the sign. The dominant color of each zone is found and the predicted sign is returned. (refer)
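The circle-and-zones step could be sketched with OpenCV as follows; the three-strip zone layout, the color thresholds, and the color-to-sign mapping are all illustrative assumptions:

```python
import numpy as np

def dominant_color(zone_bgr):
    """Classify a zone's mean BGR color as red, blue, white, or dark."""
    b, g, r = zone_bgr.reshape(-1, 3).mean(axis=0)
    if r > 120 and r > 1.4 * g and r > 1.4 * b:
        return "red"
    if b > 120 and b > 1.4 * g and b > 1.4 * r:
        return "blue"
    if min(b, g, r) > 150:
        return "white"
    return "dark"

def detect_sign(frame_bgr):
    """Find one circular sign and classify it from three zones inside it."""
    import cv2
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                               param1=100, param2=40,
                               minRadius=20, maxRadius=120)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    # Split the circle's bounding box into three horizontal strips (an
    # assumed zone layout) and read each strip's dominant color.
    x0, y0 = max(x - r, 0), max(y - r, 0)
    h = 2 * r // 3
    zones = [frame_bgr[y0 + i * h: y0 + (i + 1) * h, x0: x + r]
             for i in range(3)]
    colors = [dominant_color(z) for z in zones]
    # Toy mapping from zone colors to a sign, for illustration only.
    if colors.count("red") >= 2:
        return "stop"
    if colors.count("blue") >= 2:
        return "forward"
    return "slow"
```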
- The MQTT server is set up. (refer)
- The code for sending the signal through MQTT is added to the sign detection code explained above. (refer)
- The nodeMCU receives the signal sent by the server, and the micro-controller acts accordingly.
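Sending the detected sign to the node with the paho MQTT library might look like this minimal sketch (the broker address, port, and topic name are assumptions, not the project's values):

```python
# The six signs the detector can report, matching the classes above.
SIGNS = ("forward", "stop", "left", "right", "slow", "danger")

def publish_sign(sign, broker="192.168.4.1", topic="car/sign"):
    """Publish one detected sign; broker IP and topic are hypothetical."""
    if sign not in SIGNS:
        raise ValueError(f"unknown sign: {sign}")
    import paho.mqtt.client as mqtt
    client = mqtt.Client()
    client.connect(broker, 1883, 60)          # default MQTT port
    client.publish(topic, payload=sign, qos=1)
    client.disconnect()
```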
- Using the server.server library, the detected sign was printed to the serial monitor on the connected port.
- The output was then read by the node, and the motor speeds were controlled accordingly. (refer)
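For the wired variant, one common way to push the detected sign over a serial link from Python is pyserial, shown here as an assumption (the original uses the server.server library); the port name and baud rate are illustrative:

```python
def frame_message(sign):
    """Newline-terminated ASCII message so the NodeMCU can read line by line."""
    return (sign + "\n").encode("ascii")

def send_over_serial(sign, port="/dev/ttyUSB0", baud=115200):
    """Write one sign to a hypothetical serial port using pyserial."""
    import serial  # pyserial
    with serial.Serial(port, baud, timeout=1) as ser:
        ser.write(frame_message(sign))
```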