Autonomous vehicles promise to transform transportation, but safe, fully autonomous driving remains an open problem. Our study focuses on occluded object detection to enhance AV perception. We trained the YOLOv5 model with transfer learning, starting from weights pretrained on the COCO dataset, on a new dataset collected in Bangladesh.
- Obtain data
- Annotate images with occluded instances
- Split dataset into train, validation, and test sets
- Preprocess and augment images
- Train three models and tune parameters
- Evaluate and compare results using the test sample
- Refer to the figure for an overview of the proposed methodology
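The dataset split step above can be sketched as follows. The 70/20/10 ratio and the `split_dataset` helper are illustrative assumptions, not necessarily the paper's exact split:

```python
import random

def split_dataset(image_ids, train=0.7, val=0.2, seed=42):
    """Shuffle image IDs and split into train/val/test subsets.
    The 70/20/10 ratio is an illustrative assumption."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # fixed seed for a reproducible split
    n = len(ids)
    n_train = int(n * train)
    n_val = int(n * val)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train_ids, val_ids, test_ids = split_dataset(range(100))
print(len(train_ids), len(val_ids), len(test_ids))  # 70 20 10
```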
(Data augmentation was performed using Roboflow.)
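Roboflow applies augmentations such as flips inside its pipeline; as a minimal local sketch of what one such augmentation does to YOLO-format labels, a horizontal flip only mirrors each box's x-center. The `hflip_yolo_boxes` helper below is hypothetical, for illustration only:

```python
def hflip_yolo_boxes(boxes):
    """Horizontally flip YOLO-format boxes (class, x_center, y_center, w, h).
    Coordinates are normalized to [0, 1], so only x_center changes: x -> 1 - x."""
    return [(cls, 1.0 - x, y, w, h) for cls, x, y, w, h in boxes]

# A box centered at x=0.25 moves to x=0.75 after the flip; size is unchanged.
boxes = [(0, 0.25, 0.5, 0.1, 0.2)]
print(hflip_yolo_boxes(boxes))  # [(0, 0.75, 0.5, 0.1, 0.2)]
```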
We used the YOLOv5-small model pretrained on the COCO dataset and fine-tuned it further on our own dataset.
Our dataset can be downloaded from Google Drive by clicking here.
The YOLOv5 repository can be cloned from ultralytics-yolov5
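For reference, the standard Ultralytics workflow for fine-tuning from COCO-pretrained weights looks roughly like this; the image size, batch size, and epoch count are illustrative defaults, not the paper's exact settings:

```shell
# Clone the official repo and install its dependencies
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt

# Fine-tune the small model from COCO-pretrained weights (yolov5s.pt);
# data.yaml points at the custom dataset's train/val paths and class names
python train.py --img 640 --batch 16 --epochs 100 --data data.yaml --weights yolov5s.pt
```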
(You can also view these plots in TensorBoard by downloading the .ipynb file and opening it in Jupyter or Colab; for some reason GitHub does not render these graphs when the notebook is viewed directly.)
T. Mostafa, S. J. Chowdhury, M. K. Rhaman and M. G. R. Alam, "Occluded Object Detection for Autonomous Vehicles Employing YOLOv5, YOLOX and Faster R-CNN," 2022 IEEE 13th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 2022, pp. 0405-0410, doi: 10.1109/IEMCON56893.2022.9946565.
