
Welcome to the opencv-python-traffic-lights wiki!

How It Works: I’ll use image 10 to demonstrate my solution:

First, I split the original image into left- and right-hand sides to make it easier to segment the two traffic lights. This assumes there is only one traffic light on each side of the image (and of the visible street), so the approach fails if multiple traffic lights of significant size appear on one side. Per the assignment directions, I no longer crop the images vertically (by cutting out the lower half).
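
A minimal sketch of the split (the file name image10.png is a hypothetical stand-in, and the split is a simple midpoint slice):

```python
import cv2

# Load the original image ("image10.png" is a hypothetical file name).
img = cv2.imread("image10.png")
h, w = img.shape[:2]

# Split at the horizontal midpoint into left- and right-hand sides.
left = img[:, : w // 2]
right = img[:, w // 2 :]
```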

I then apply mean shift segmentation to the image. I later realized that some sort of smoothing beforehand might have been a good idea, but I never implemented it.
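
Roughly, the call looks like this (the spatial radius sp and color radius sr below are hypothetical placeholders, not my tuned values):

```python
# Mean shift segmentation on the left half; sp and sr are placeholder radii.
segmented = cv2.pyrMeanShiftFiltering(left, sp=20, sr=40)
```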

I now create a grayscale version of the image for edge detection, though as I later realized, I believe the Canny function has a built-in grayscale conversion.

Then I apply Canny:
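
A sketch of the grayscale conversion plus the Canny call (the hysteresis thresholds 100/200 are placeholders):

```python
# Grayscale conversion followed by Canny edge detection.
gray = cv2.cvtColor(segmented, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
```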

Now I find the contours in the image. I found that running findContours on the edge image worked better than running it on the grayscale image.
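
The contour step, assuming OpenCV 4.x (where findContours returns two values; 3.x returns three); the retrieval mode and approximation flag are illustrative choices:

```python
# Contours of the Canny edge image.
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
```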

I then look for circular regions within the image.

Side Note: I tried the Hough circle function; however, I found the parameter values I gave it were magic numbers, and highly unstable ones at that. Slight variations in the values produced wildly different results, and values tweaked for one image performed poorly on another. I definitely spent too much time trying to get Hough circle detection to work.

I settled on using the minEnclosingCircle function instead. While I still had to tweak certain conditional values, I found the results far more repeatable and stable.

End of side note.

To resume: I am now finding circular regions within the image. I use the minimum enclosing circle function, which gives the smallest circle that encloses each contour.
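
Continuing the sketch, one candidate circle per contour:

```python
# Minimum enclosing circle of each contour, kept as (center, radius, contour).
candidates = []
for cnt in contours:
    (cx, cy), radius = cv2.minEnclosingCircle(cnt)
    candidates.append(((int(cx), int(cy)), int(radius), cnt))
```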

I compare the region inside the minimum enclosing circle to a “perfect” filled-in circle region generated from the radius and center of the minimum enclosing circle. I use the matchShapes function to compare the two.

I also calculate the mean intensity of the region inside the minimum enclosing circle.

I then run a conditional test on each circular region using the radius of its minimum enclosing circle, the mean intensity value, and the matchShapes value. From testing, I have a range of expected radii; I expect the mean intensity to be fairly high for an illuminated traffic light; and I expect the matchShapes value to be quite good for a nearly circular light.
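
A sketch of this test; the threshold values below are hypothetical stand-ins for my hand-tuned numbers:

```python
import numpy as np

MIN_R, MAX_R = 3, 25      # expected radius range (placeholder values)
MIN_INTENSITY = 120       # illuminated lights should be fairly bright
MAX_SHAPE_DIST = 0.1      # matchShapes distance: lower means more circular

circles = []
for (cx, cy), r, cnt in candidates:
    if not (MIN_R <= r <= MAX_R):
        continue

    # "Perfect" filled circle built from the enclosing circle's geometry.
    perfect = np.zeros(gray.shape, dtype=np.uint8)
    cv2.circle(perfect, (cx, cy), r, 255, -1)
    p_cnts, _ = cv2.findContours(perfect, cv2.RETR_EXTERNAL,
                                 cv2.CHAIN_APPROX_SIMPLE)
    shape_dist = cv2.matchShapes(cnt, p_cnts[0], cv2.CONTOURS_MATCH_I1, 0.0)

    # Mean intensity inside the enclosing circle.
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.circle(mask, (cx, cy), r, 255, -1)
    mean_val = cv2.mean(gray, mask=mask)[0]

    if mean_val >= MIN_INTENSITY and shape_dist <= MAX_SHAPE_DIST:
        circles.append(((cx, cy), r, shape_dist))
```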

However, the parameters of this conditional test are not as stable as I would like and required a lot of tweaking. I think it works better than the Hough circle approach mentioned earlier; still, I doubt my code works on images outside the 14-image data set.

Plotting the found circular regions for the left image (for the purposes of this demonstration I now focus only on the left-hand side):
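
A rough way to visualize this (the drawing and display calls are illustrative):

```python
# Draw the accepted circles on a copy of the left image for inspection.
vis = left.copy()
for (cx, cy), r, _ in circles:
    cv2.circle(vis, (cx, cy), r, (0, 255, 0), 2)
cv2.imshow("circles", vis)
cv2.waitKey(0)
```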

Now I look for rectangular regions that contain a circular region found above. I also compare the shape of the rectangular region to a “perfect” rectangle.

Since multiple rectangles containing circular regions can be found, I select the rectangular region with the best match value against a “perfect” rectangle. Ideally, I would also score how well the circles match inside each rectangular region, aggregate the two match values somehow, and then compare across the different rectangular regions.

If multiple circular regions exist in the rectangular region ultimately selected, I then select the circular region with the best match value against a “perfect” circle.
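
A rough sketch of this rectangle stage (the use of boundingRect and the exact scoring structure are illustrative reconstructions of the description, not verbatim code):

```python
best_rect, best_rect_dist, best_inside = None, float("inf"), []
for cnt in contours:
    x, y, rw, rh = cv2.boundingRect(cnt)

    # Circles whose centers fall inside this rectangle.
    inside = [c for c in circles
              if x <= c[0][0] <= x + rw and y <= c[0][1] <= y + rh]
    if not inside:
        continue

    # Compare the contour to a "perfect" filled rectangle of the same bounds.
    perfect = np.zeros(gray.shape, dtype=np.uint8)
    cv2.rectangle(perfect, (x, y), (x + rw, y + rh), 255, -1)
    p_cnts, _ = cv2.findContours(perfect, cv2.RETR_EXTERNAL,
                                 cv2.CHAIN_APPROX_SIMPLE)
    dist = cv2.matchShapes(cnt, p_cnts[0], cv2.CONTOURS_MATCH_I1, 0.0)

    if dist < best_rect_dist:
        best_rect, best_rect_dist, best_inside = (x, y, rw, rh), dist, inside

# Best circle inside the winning rectangle (lowest matchShapes distance).
best_circle = min(best_inside, key=lambda c: c[2]) if best_inside else None
```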

At this point I can draw the rectangle I found, thereby showing I have identified where the traffic light is. In the image below, I only ran the analysis on the left-hand side, but the same process would be done on the right-hand side.
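Continuing the sketch:

```python
# Draw the winning rectangle to mark the detected traffic light.
x, y, rw, rh = best_rect
cv2.rectangle(vis, (x, y), (x + rw, y + rh), (255, 0, 0), 2)
cv2.imshow("traffic light", vis)
cv2.waitKey(0)
```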

Now I have to determine the color of the light. I suppose I could have used template matching, under the assumption that non-illuminated traffic light circles (in this case, the red and yellow lights) would not produce significant enough circle objects in the Canny image. However, I decided instead to use backprojection (which turned out to be less reliable than I had hoped).

I create a mask of the circular region identified earlier and apply it to the original color image.
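
Roughly:

```python
# Mask out the winning circle from the original color (left) image.
(cx, cy), r, _ = best_circle
mask = np.zeros(left.shape[:2], dtype=np.uint8)
cv2.circle(mask, (cx, cy), r, 255, -1)
light_region = cv2.bitwise_and(left, left, mask=mask)
```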

I now backproject each of my per-color training sets and pick the color that produced the probability image with the highest average probability (I am not sure whether this is better or worse than thresholding and then counting the pixels in the thresholded binary image).

Here are my training sets (I’ve uploaded them under the png file names trainforyellow2, trainfor12610, and trainforred):

I noticed a lot of variation in the green colors, hence the stitched together green train set.
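
A sketch of the backprojection vote using a hue-saturation histogram (the mapping of trainfor12610 to the green set is inferred from the note above, and the .png extensions and histogram setup are illustrative):

```python
# Hypothetical file-name-to-color mapping (trainfor12610 taken as the green set).
train_files = {"red": "trainforred.png",
               "yellow": "trainforyellow2.png",
               "green": "trainfor12610.png"}

hsv_region = cv2.cvtColor(light_region, cv2.COLOR_BGR2HSV)

best_color, best_score = None, -1.0
for color, path in train_files.items():
    train = cv2.imread(path)
    hsv_train = cv2.cvtColor(train, cv2.COLOR_BGR2HSV)

    # Hue-saturation histogram of the training set, normalized to [0, 255].
    hist = cv2.calcHist([hsv_train], [0, 1], None, [180, 256],
                        [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    # Backproject onto the masked light region; average only over the circle.
    prob = cv2.calcBackProject([hsv_region], [0, 1], hist,
                               [0, 180, 0, 256], 1)
    score = cv2.mean(prob, mask=mask)[0]
    if score > best_score:
        best_color, best_score = color, score

print(best_color)  # e.g. "green" for image 10
```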

For image 10, my code outputs a result of green, which is correct.

Unfortunately, I have yet to write code that programmatically determines the success metrics of my approach (though possibly in the future!). I know from my own testing that a bug exists for certain images: for some reason, the circular region detected on the traffic light is ignored when searching for rectangular regions with circular regions inside them, so the traffic light is never detected. In certain other images, I do not detect the circular region of the traffic light at all. I probably wrote my circle-collection code incorrectly and it happens to fail in certain circumstances, but I did not have enough time to figure out the cause.

My code will most likely fail when the color of the light is too washed out (which may be reason enough to design a template matcher for this part) and when other circular colored objects exist in the scene, such as the rear lights of the car ahead (which the vertical cropping was initially meant to address, but was dropped after the assignment directions said not to do it). Also, as previously mentioned, the arbitrary numbers in my conditional statements for recognizing circular and rectangular regions were tuned by looking at these 14 images, so my approach will most likely fail on images outside the 14-image data set (it even fails on some within it).

I initially used a multi-scale template matching approach, but I abandoned it. I kept improving it, yet found that the resolution of the source (template) file limited my ability to detect very small traffic lights. There was also the issue of determining the color of the lights: template matching with a red-light template and a green-light template would generally both locate the same rectangle in the image, meaning I would somehow have to choose which colored template match was correct (which might have been possible from the match value). I could have used backprojection on the match, and even combined parts of my solution above, but I still ran into the problem of some small traffic lights not being detected, so I abandoned this approach.

I did try using small hand-drawn templates for the very small traffic lights; however, I never quite figured out how to switch between the high-resolution and low-resolution versions of the template while also scaling the image and maintaining a record of the recorded match values.
