Running the simulation
1. Install domini-uim to your desired location.
2. Open the Anaconda shell and navigate to the installation directory.
3. Run `python normal.py` to see the randomly generated traffic run.
4. In a File Explorer, open the `sim3` folder (present in the installation location) and open the file `monitor.html` in a browser to see the live monitor.
5. Run `python ai.py` to let the traffic light system be optimized.
6. Run `python algo.py` to see the same random traffic (random seeded) with the optimized traffic light system.
7. Again, open the file `sim3/monitor.html` to see the live monitor for the simulation.
Traffic Camera Feed To Density Value using image processing
Visualising the density evaluation:
1. In the Anaconda Prompt, run `pip install opencv-python`.
2. In the Anaconda Prompt, navigate to the directory where domini-uim is installed.
3. Use the file `sim3/monitor2` to visualize the parameters.
The purpose of this project is to detect vehicles and keep a count of them while the traffic is 'static'. Here, static traffic is the state of traffic during a red light signal; vehicles accumulate at the junction during this static period. The vehicles detected in every frame are counted. We define a Region Of Interest (ROI), i.e., the area of usable road visible to the traffic camera, and divide the vehicle count by the area of the ROI to get the density of that road (edge).
In order to get a count, we first need to differentiate the vehicles from the road (background). Image processing methods from OpenCV are used for this. We treat an image as an array of numbers (one value per pixel). We first convert the RGB (Red, Green, Blue) channels to HSV (Hue, Saturation, Value) channels; the Value channel shows a clear difference between the vehicle and no-vehicle conditions. We then restrict processing to the Region Of Interest to improve accuracy and reduce noise.
We can use this information to determine what is background and what is a vehicle, but to do this we need a background image, i.e., an empty road. With the background image, we can compare each frame against it and detect when a pixel's difference exceeds the 'threshold value'. We assume this occurs when there is a vehicle at that pixel, and we set those distinguished pixels to the maximum value.
Once the blobs/shapes are created, we check the shapes (contours) to determine which are most likely to be vehicles and dismiss those that are not. We do this by keeping only the detected contours that are above a certain size. Note that this size will change depending on the video feed.
The vehicle counter consists of two classes: Vehicle, which defines each vehicle object, and VehicleCounter, which determines which 'vehicles' are valid before counting them. VehicleCounter also tracks the movement vector of each tracked vehicle from frame to frame, giving an indicator of which movements are true matches and which are false, so that we do not incorrectly match vehicles. If a vehicle object satisfies the above criteria, it is counted, and the count is used to compute the traffic density.
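A skeleton of the two classes might look as follows. Only the class names come from the text; the matching rule (nearest centroid within a maximum per-frame distance), the 3-frame validity criterion, and all method names are assumptions for this sketch:

```python
import math

class Vehicle:
    """One tracked vehicle: an id and its centroid positions per frame."""
    def __init__(self, vid, centroid):
        self.vid = vid
        self.positions = [centroid]
        self.counted = False

    @property
    def last_position(self):
        return self.positions[-1]

class VehicleCounter:
    """Matches detections to tracked vehicles and counts valid ones."""
    MAX_MATCH_DISTANCE = 50.0  # assumed: max plausible movement per frame
    MIN_TRACK_FRAMES = 3       # assumed: frames needed to count a vehicle

    def __init__(self):
        self.vehicles = []
        self.next_id = 0
        self.count = 0

    def update(self, centroids):
        """Feed the centroids detected in one frame."""
        for c in centroids:
            match = self._closest_vehicle(c)
            if match is not None:
                # Movement vector is small enough: treat as a true match.
                match.positions.append(c)
            else:
                # No plausible match: start tracking a new vehicle.
                self.vehicles.append(Vehicle(self.next_id, c))
                self.next_id += 1
        # Count each vehicle once it has been tracked long enough.
        for v in self.vehicles:
            if not v.counted and len(v.positions) >= self.MIN_TRACK_FRAMES:
                v.counted = True
                self.count += 1

    def _closest_vehicle(self, centroid):
        """Nearest tracked vehicle within the allowed match distance."""
        best, best_d = None, self.MAX_MATCH_DISTANCE
        for v in self.vehicles:
            d = math.dist(v.last_position, centroid)
            if d < best_d:
                best, best_d = v, d
        return best

# Usage: one detection drifting across three frames counts as one vehicle.
counter = VehicleCounter()
counter.update([(10, 10)])
counter.update([(15, 12)])
counter.update([(20, 14)])
```

Requiring several consecutive matches before counting is what rejects false matches from one-frame noise blobs.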
We determine the density of vehicles on the road in question by dividing the number of counted vehicles by the area of the ROI during a static traffic state. By static traffic we mean the period when the red light signal is on.
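As a worked instance of this formula (the count and ROI dimensions below are placeholder numbers, not measured values):

```python
# Density = vehicle count / ROI area, evaluated during the red-light
# ("static") phase. All numbers here are placeholders for the sketch.
vehicle_count = 12            # vehicles counted while the light was red
roi_area_px = 300 * 550       # ROI area in pixels

density = vehicle_count / roi_area_px
```

This per-edge density is the value the optimizer consumes for each road.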