The ever-growing number of vehicles on the roads has been a problem in India for a long time. To add to the woes, the traffic lights here are pre-timed and do not take ground conditions into account, no matter how many vehicles are waiting on a particular street. Hence, we plan to design a complete Adaptive Traffic Light Filtering system that assesses the situation in real time and allots time to a signal in accordance with the number of vehicles waiting before it, using Image Acquisition and Image Processing techniques. Moreover, we plan to incorporate an algorithm in our design to detect the Number Plates of defaulting vehicles that jump Red Lights, and to store them for future prosecution.
- Collection of Training Data (with visible vehicular congestion) for our algorithms, which involves detection of Vehicular Clusters waiting before the Red Light stop line.
- An algorithm for Number Plate detection of defaulting vehicles through Image Retargeting.
- An algorithm for efficient and Adaptive Time Allotment to traffic signals in accordance with the vehicular congestion before each signal.
The following image processing steps are applied in order, and the resultant image is as shown below:
- Input Image
- Grayscale and Binary Conversion
- Adaptive Background Subtraction
- Histogram Equalisation
- Thresholding
- Edge Detection using Sobel
- Cleaning up the Image Borders
- Image Dilation
- Density Calculations
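The density-estimation steps above can be sketched as follows. This is a minimal NumPy-only illustration of the pipeline (in practice a library such as OpenCV would supply these operations); all function names, the reference-frame approach to background subtraction, and the thresholds are our illustrative assumptions, not the project's actual code:

```python
import numpy as np

def to_gray(rgb):
    # Grayscale conversion via the standard luminance weights
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

def equalize(gray):
    # Histogram equalisation: remap intensities through the normalised CDF
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    if cdf.max() == cdf.min():
        return gray  # flat image: nothing to equalise
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255
    return cdf[gray].astype(np.uint8)

def sobel_magnitude(gray):
    # Sobel edge detection via explicit 3x3 convolution
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            win = padded[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def vehicle_density(frame_gray, background_gray, thresh=40):
    # Background subtraction against a reference (empty-road) frame
    diff = np.abs(frame_gray.astype(int)
                  - background_gray.astype(int)).astype(np.uint8)
    diff = equalize(diff)                      # histogram equalisation
    mask = (diff > thresh).astype(np.uint8)    # thresholding
    mask |= (sobel_magnitude(frame_gray) > 128).astype(np.uint8)  # edges
    mask[0, :] = mask[-1, :] = 0               # clean up image borders
    mask[:, 0] = mask[:, -1] = 0
    # Dilation with a 3x3 structuring element (a max filter)
    h, w = mask.shape
    padded = np.pad(mask, 1)
    dilated = np.zeros_like(mask)
    for i in range(3):
        for j in range(3):
            dilated = np.maximum(dilated, padded[i:i + h, j:j + w])
    # Density = fraction of foreground pixels in the frame
    return dilated.mean()
```

The returned density (a fraction between 0 and 1) could then be mapped to a green-signal duration for the corresponding lane.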
The Number Plate detection pipeline proceeds as follows:
- Taking the Image Input
- Binarising the Input Image
- Thresholding
- Edge Detection (taking into account that the column with the most Black Pixels is an edge in the Input Image)
- Removal of the detected edges from the image by cropping
- Character Detection using Point Detection (as soon as a Black Pixel is encountered while iterating, it indicates that a character has been detected, and column-wise scanning starts)
- Edge Detection (we then look for a completely white column to mark the end of the character)
- Storage of the array of Row Numbers where Black Pixels are obtained
- Obtaining the Character Width and omitting characters with widths below a certain threshold to remove false positives
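The column-wise character segmentation described above can be sketched as follows; the function name, the 0/1 ink convention, and the `min_width` threshold value are illustrative assumptions:

```python
import numpy as np

def segment_characters(binary_plate, min_width=3):
    """Column-wise scan of a binarised plate image (1 = black/ink, 0 = white).

    A character begins at the first column containing a black pixel and
    ends at the next completely white column; segments narrower than
    min_width are omitted as false positives.
    """
    cols_have_ink = binary_plate.any(axis=0)
    chars = []
    start = None
    for x, has_ink in enumerate(cols_have_ink):
        if has_ink and start is None:
            start = x                       # black pixel found: character begins
        elif not has_ink and start is not None:
            if x - start >= min_width:      # drop too-narrow detections
                chars.append(binary_plate[:, start:x])
            start = None
    # Close a character that runs to the right edge of the plate
    if start is not None and binary_plate.shape[1] - start >= min_width:
        chars.append(binary_plate[:, start:])
    return chars
```

Each returned slice holds the rows and columns of one candidate character, ready for the recognition stage.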
- Inverting the Dataset Images for Character Recognition.
- Resizing the Dataset Images to suit our character images.
- Comparison of the 42x24 = 1008 pixels of the images using pixel-to-pixel mapping (ideally, the character has the maximum overlap with its Dataset Image).
- Assigning the Dataset Image name/number to the character.
- Obtaining the final detected characters.
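The template-matching recognition above can be sketched as follows. The 42x24 size comes from the text; the nearest-neighbour resize and the function names are our hypothetical stand-ins for the project's actual routines:

```python
import numpy as np

def resize_nn(img, out_h=42, out_w=24):
    # Nearest-neighbour resize so dataset images match the 42x24 character size
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def recognize_character(char_img, templates):
    # Pixel-to-pixel comparison over all 42 x 24 = 1008 pixels; the dataset
    # image with the maximum overlap (most agreeing pixels) wins, and its
    # name/number is assigned to the character
    best_name, best_score = None, -1
    for name, tmpl in templates.items():
        score = int(np.sum(char_img == tmpl))
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Running `recognize_character` over every segment produced by the segmentation stage yields the final detected plate string.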