Road Damage Detection Using YOLO

The present research examines the development of a road damage detection model and its potential economic and social impact. Timely, accurate identification of deteriorated road conditions can significantly reduce vehicular accidents, improve traffic flow, and ultimately save lives. The technology can also yield substantial financial benefits for governmental entities and road management associations by improving the prioritisation and scheduling of maintenance and repair work. Deep learning techniques, particularly the YOLOv7 model, have delivered notable improvements in detection accuracy, computational efficiency, and overall effectiveness.

This study conducted a series of experiments comparing the performance of YOLOv5 and YOLOv7 models, employing hyperparameter optimisation and image augmentation techniques to improve precision. The results highlight the superior performance of the YOLOv7 architecture in accurately identifying damaged regions and predicting object categories. Experiment 1 applied rotation, hue-saturation-value adjustment, image scaling, and flipping, achieving an average accuracy of 79.75% across all test image categories with YOLOv7. Experiment 2, however, revealed a limitation of adding the Gaussian blur method on YOLOv7: the model became biased toward blurred images, compromising overall precision and reducing accuracy to 19.75%.
In Experiment 3, the YOLOv5 algorithm was trained on a refined dataset, yielding an average accuracy of 55.75% across all categories, lower than the YOLOv7 result from Experiment 1. Based on these findings, this study recommends the YOLOv7-trained model, in conjunction with the image augmentation techniques of Experiment 1, for detecting road damage; the model attained an F1 score of 75%. This performance justifies the technology's adoption in practical applications, providing significant insight into road conditions and enabling timely maintenance and repair interventions. End-users can conveniently run the proposed model through the software designed in this study. Further research and refinement of the model may reveal its full potential for improving road infrastructure administration and ensuring safer, more efficient transportation systems.
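For reference, the reported F1 score is the harmonic mean of precision and recall. The values below are illustrative only, not the measurements from the thesis:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# If precision and recall were both 0.75 (illustrative values only),
# the F1 score would also be 0.75, i.e. 75%.
print(f1_score(0.75, 0.75))  # -> 0.75
```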

Experiment 1 (YOLOv7)

Labelled vs. predicted detections on a test batch (test_batch2_labels, test_batch2_pred).

Experiment 2 (YOLOv7 Augmented)

Labelled vs. predicted detections on the test set.

Experiment 3 (YOLOv5)

Labelled vs. predicted detections on a validation batch (val_batch2_labels, val_batch2_pred).

Real Time Testing

Real-time model experiment recording (real_time_Model_Experiment).

Designed Computer Software Interface

Pipeline

Pipeline diagram of the designed software interface.

Paper Link

Road Damage Detection Based on Deep Learning
