Contributors: Ashton | Chin Hee | Jansen | Yongquan
This repo was created for the final week of the deep-skilling phase of our AI Apprenticeship Programme (AIAP), during which we built a mini-project to help improve AISG's workflow and welfare.
We envision a robot moving around the office to detect litter, like the example shown below, where a piece of plastic trash is detected with a high confidence of 98%.
This is made possible via an object detection model trained using the open-source YOLOv5 project from Ultralytics, on a subset of the TACO dataset.
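For reference, YOLOv5 models are typically trained with the train.py script from the Ultralytics repo. The invocation below is only a hypothetical sketch: the dataset config name taco.yaml and the hyperparameter values are assumptions, not the exact settings used for our experiments.

python train.py --img 640 --batch 16 --epochs 100 --data taco.yaml --weights yolov5n.pt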
This section gives an overview of how the various training and evaluation metrics changed as the number of epochs increased. Two experiments (exp3, exp4) are shown here.
- exp4 gave the higher final evaluation scores (mAP_0.5, mAP_0.5:0.95, precision, recall), so its resulting model weights (found in model/yolov5n_taco_best.pt) are used for inference, incorporating part of the codebase from PeekingDuck (see the sketch after this list).
- Both training and validation losses appear to be still decreasing, which suggests that training for more epochs could yield even better results.
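As a quick sanity check of the trained weights outside PeekingDuck, they can be loaded through YOLOv5's torch.hub interface. A minimal sketch, assuming PyTorch is installed; the image path is a placeholder:

import torch

# load the custom-trained weights via the official YOLOv5 hub entry point
model = torch.hub.load("ultralytics/yolov5", "custom", path="model/yolov5n_taco_best.pt")

# run inference on a sample image (placeholder path) and print detections
results = model("sample_office_photo.jpg")
results.print()  # class, confidence and bounding box summary per detection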
To create a new conda environment & activate it:
conda create -n pkd-litter python=3.8
conda activate pkd-litter
To install PyTorch for Windows OS users (currently the only tested setup):
pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu113
To install PyTorch for other OS users (not tested):
- Refer to https://pytorch.org/ for the respective command to install PyTorch.
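Either way, you can verify the installation with a quick check in Python:

# sanity check that PyTorch is installed and (optionally) sees a CUDA GPU
import torch

print(torch.__version__)          # e.g. ends with +cu113 for the CUDA 11.3 wheel above
print(torch.cuda.is_available())  # True if a CUDA-capable GPU can be used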
Ensure you have a webcam connected to your computer.
To start the litter detection engine:
peekingduck run
A new window will pop up, showing your webcam feed with any detected litter.
To end the session, press Ctrl + C in the command line, or simply close the pop-up window.
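Under the hood, peekingduck run executes the node pipeline declared in pipeline_config.yml. The snippet below is only an illustrative sketch, not this repo's actual config: the custom model node name (custom_nodes.model.yolov5_taco) is hypothetical, and the exact nodes and parameters may differ.

nodes:
  - input.visual:                    # read frames from the webcam
      source: 0
  - custom_nodes.model.yolov5_taco   # hypothetical custom node wrapping the trained weights
  - draw.bbox                        # draw detection boxes on each frame
  - output.screen                    # show the annotated feed in a pop-up window
  - output.csv_writer                # log detection results to a CSV file
  - output.media_writer:             # save the annotated video
      output_dir: processed/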
For each session, two items will be created in the processed/ folder:
- a CSV file
- an MP4 file
Work in Progress...