This GitHub repository shows real-time object detection on a Raspberry Pi using a YOLOv5 model converted to TensorFlow Lite, with LED indicators and an LCD display. The features of this project include:
- Display the FPS for each detection
- Indicate each detected class with an LED (there are 5 classes: car, person, truck, bus, motorbike)
- Show the Raspberry Pi's CPU usage and temperature on a 16x2 LCD
Below is a demo video showcasing the Raspberry Pi in action. While real-time object detection runs, the video frames show the FPS, the LED indicators turn on based on the detected classes, and the CPU usage and temperature are displayed on the LCD screen.
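The CPU usage and temperature shown on the LCD can be read directly from the OS. Here is a minimal, standard-library-only sketch of one way to do it; the sysfs path and the load-average approach are assumptions for illustration, not necessarily how detect_rpi_led.py does it:

```python
import os

def cpu_temperature_c():
    # Raspberry Pi OS exposes the SoC temperature in millidegrees Celsius.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

def cpu_load_percent():
    # 1-minute load average scaled by core count, as a rough CPU-usage figure.
    return min(100.0, os.getloadavg()[0] / os.cpu_count() * 100.0)

if __name__ == "__main__":
    print(f"CPU: {cpu_load_percent():.0f}%  Temp: {cpu_temperature_c():.1f}C")
```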
Model | mAP@50 | mAP@50:95 | FPS |
---|---|---|---|
YOLOv5s 640px fp32 | 94.7 | 74.1 | 0.5 |
YOLOv5s 416px fp32 | 91.8 | 72.5 | 1.1 |
YOLOv5s 320px fp32 | 90.5 | 69.8 | 1.87 |
YOLOv5n 640px fp32 | 91.4 | 67.3 | 1.5 |
YOLOv5n 416px fp32 | 89 | 66.3 | 3.7 |
YOLOv5n 320px fp32 | 86.7 | 63.7 | 5.7 |
YOLOv5s 640px int8 | 93.9 | 70.4 | 0.7 |
YOLOv5s 416px int8 | 90.5 | 67.5 | 1.7 |
YOLOv5s 320px int8 | 90.1 | 63.9 | 2.9 |
YOLOv5n 640px int8 | 90.7 | 64.4 | 1.9 |
YOLOv5n 416px int8 | 88.7 | 63.2 | 4.5 |
YOLOv5n 320px int8 | 85.9 | 59.3 | 7.2 |
- Raspberry Pi 4 (I'm using the 8 GB version)
- Raspberry Pi OS 11 (Bullseye) 64-bit
- Pi Camera v1/v2 or a web camera
- PCB or dot/perfboard PCB
- 16x2 blue LCD (1602 module with I2C backpack)
- ✨ Jumper wires ✨
Follow the tables below to establish the proper connections; you can also read the reference here: GPIO on Raspberry Pi 4.
LED Wiring - Raspberry Pi
Wire Color | GPIO Pin |
---|---|
Red | GPIO 17 |
Green | GPIO 18 |
Yellow | GPIO 23 |
Cyan | GPIO 27 |
White | GPIO 22 |
Black (GND) | GND |
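As a quick sanity check after wiring, the sketch below blinks the five LEDs on the pins from the table above using RPi.GPIO (preinstalled on Raspberry Pi OS). The class-to-LED mapping here is only an assumption for illustration; see detect_rpi_led.py for the mapping the project actually uses.

```python
import time
import RPi.GPIO as GPIO

# BCM pin numbers from the wiring table above; the class-to-LED mapping is assumed.
LED_PINS = {"car": 17, "person": 18, "truck": 23, "bus": 27, "motorbike": 22}

GPIO.setmode(GPIO.BCM)
for pin in LED_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

try:
    # Blink each LED in turn to confirm the wiring.
    for name, pin in LED_PINS.items():
        print(f"Testing {name} LED on GPIO {pin}")
        GPIO.output(pin, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(pin, GPIO.LOW)
finally:
    GPIO.cleanup()
```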
I2C Wiring - Raspberry Pi
Wire Color | Connection |
---|---|
Red | 5V |
Black | GND |
Purple | SDA |
Brown | SCL |
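To verify the LCD connection, enable I2C in raspi-config and find the backpack's address with `i2cdetect -y 1` (commonly 0x27 for a PCF8574 expander). The snippet below is a small test using the RPLCD library; both the library choice and the 0x27 address are assumptions, so adjust them to your setup.

```python
from RPLCD.i2c import CharLCD

# Assumes a PCF8574 I2C backpack at address 0x27 on I2C bus 1 (check with i2cdetect).
lcd = CharLCD("PCF8574", 0x27, port=1, cols=16, rows=2)

lcd.clear()
lcd.write_string("CPU: 35%")      # first row
lcd.cursor_pos = (1, 0)           # move to the second row
lcd.write_string("Temp: 48.2C")
```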
To run this project, you need Python 3.5 or higher installed on your system. Follow these steps to get started:
- Clone the repository and navigate to the project directory:
git clone https://github.com/kiena-dev/YOLOv5-tensorflow-lite-Raspberry-Pi.git
cd YOLOv5-tensorflow-lite-Raspberry-Pi
- Create a Python virtual environment (optional but recommended):
python3 -m venv venv
- Activate the virtual environment:
source venv/bin/activate
- Install the required dependencies using pip3:
pip3 install -r requirements.txt
Now you have successfully installed the project and its dependencies.
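Before running detection with .tflite weights, it can be worth confirming that a TensorFlow Lite interpreter is importable. YOLOv5 can use either the lightweight tflite-runtime package or the full TensorFlow package; the check below is a small sketch and makes no assumption about the exact contents of requirements.txt.

```python
# Quick check that a TensorFlow Lite interpreter is available for .tflite weights.
try:
    from tflite_runtime.interpreter import Interpreter  # lightweight runtime
    print("Using tflite_runtime")
except ImportError:
    import tensorflow as tf  # fall back to the full TensorFlow package
    Interpreter = tf.lite.Interpreter
    print(f"Using TensorFlow {tf.__version__}")
```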
$ python detect.py --weights yolov5s.pt --source 0 # webcam
img.jpg # image
vid.mp4 # video
path/ # directory
'path/*.jpg' # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
Example: video
Default (without LED/LCD):
python detect.py --img 320 --weights yolov5n_320px-fp32.tflite --source video_test.mp4
With LED/LCD:
python detect_rpi_led.py --img 320 --weights yolov5n_320px-fp32.tflite --source video_test.mp4
Example: webcam
Default (without LED/LCD):
python detect.py --img 320 --weights yolov5n_320px-fp32.tflite --source 0
With LED/LCD:
python detect_rpi_led.py --img 320 --weights yolov5n_320px-fp32.tflite --source 0
If you want to train your own model, you can utilize the resources provided below:
Dataset from Roboflow:
Be sure to make use of these resources to train your model and achieve optimal results!
You can change, add, or remove classes in coco128.yaml. Modify the section below:
names:
- bus
- mobil
- honda
- orang
- truck
nc: 5
roboflow:
license: CC BY 4.0
project: skripsi-dtmyf
url: https://universe.roboflow.com/devan-naratama-2xq45/skripsi-dtmyf/dataset/2
version: 2
workspace: devan-naratama-2xq45
test: ../test/images
train: /devan/datasets/Skripsi-2/train/images
val: /devan/datasets/Skripsi-2/valid/images
You can replace these with your own classes!
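When editing the dataset YAML, a common slip is letting nc fall out of sync with the names list. The sketch below uses PyYAML (already a YOLOv5 dependency) to check that; the file path is a placeholder, so point it at your own YAML.

```python
import yaml

# Load the dataset config and verify that nc matches the number of class names.
with open("coco128.yaml") as f:  # adjust the path to your custom YAML
    cfg = yaml.safe_load(f)

names, nc = cfg["names"], cfg["nc"]
assert nc == len(names), f"nc={nc} but {len(names)} names are listed: {names}"
print(f"{nc} classes: {', '.join(names)}")
```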
Special thanks to the following resources that inspired and contributed to this project: