A comprehensive vehicle detection and tracking system using YOLOv8 and multiple tracking algorithms (ByteTrack & DeepSORT) for real-time traffic monitoring and vehicle counting.
- Features
- Demo
- Dataset
- Installation
- Project Structure
- Usage
- Model Training
- Tracking Methods
- Results
- Contributing
- License
- Multi-class Vehicle Detection: Detect 7 different vehicle types:
  - Car
  - Two Wheeler (Motorcycles, Bikes)
  - Auto (Rickshaw)
  - Bus
  - Truck
  - Number Plate
  - Blur Number Plate
- Advanced Tracking:
  - ByteTrack implementation
  - DeepSORT integration
  - Unique vehicle counting
  - Real-time ID assignment
- Flexible Output:
  - Save tracked videos
  - Export detection results
  - Frame-by-frame analysis
The system processes traffic videos and:
- Detects vehicles in each frame
- Assigns unique IDs to tracked vehicles
- Counts vehicles passing through the scene
- Saves an annotated output video (a minimal sketch follows)
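At its simplest, detection, ID assignment, and saving the annotated video can all be driven by a single Ultralytics `track()` call; the notebooks layer explicit counting on top. A minimal sketch (the input video name here is a placeholder):

```python
from ultralytics import YOLO

# Trained weights (see Model Training below); 'traffic_video.mp4' is a placeholder
model = YOLO('runs/detect/train4/weights/best.pt')
model.track(source='traffic_video.mp4', tracker='bytetrack.yaml', save=True)
```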
Source: Traffic Vehicles Object Detection Dataset
Dataset Statistics:
- Classes: 7 vehicle types
- Split: Train / Validation / Test
- Format: YOLO format (normalized bounding boxes)
- Annotations: `.txt` files with `class_id, x_center, y_center, width, height` (see the example below)
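For illustration, one line of such a label file and its conversion to pixel coordinates might look like this (the values and image size below are made up, not taken from the dataset):

```python
# One YOLO-format label line: class_id x_center y_center width height (normalized)
line = "0 0.512 0.634 0.210 0.180"   # illustrative values only
img_w, img_h = 1280, 720             # assumed image resolution

class_id, x_c, y_c, w, h = line.split()
x_c, y_c, w, h = map(float, (x_c, y_c, w, h))

# Convert the normalized box to pixel corner coordinates
x1, y1 = int((x_c - w / 2) * img_w), int((y_c - h / 2) * img_h)
x2, y2 = int((x_c + w / 2) * img_w), int((y_c + h / 2) * img_h)
print(int(class_id), (x1, y1, x2, y2))
```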
Dataset Structure:
Dataset/
├── vehicle.yaml # Dataset configuration
├── images/
│ ├── train/ # Training images
│ ├── val/ # Validation images
│ └── test/ # Test images
└── labels/
├── train/ # Training labels
└── val/ # Validation labels
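A quick sanity check that every image has a matching label file (a sketch assuming the layout above and `.jpg` images):

```python
from pathlib import Path

root = Path('Dataset')
for split in ('train', 'val'):
    images = {p.stem for p in (root / 'images' / split).glob('*.jpg')}
    labels = {p.stem for p in (root / 'labels' / split).glob('*.txt')}
    print(f"{split}: {len(images)} images, {len(labels)} labels, "
          f"{len(images - labels)} images without labels")
```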
Requirements:
- Python 3.8 or higher
- CUDA-capable GPU (recommended for training)
- 8GB+ RAM
```bash
# Clone the repository
git clone https://github.com/nicekid1/traffic-detection.git
cd traffic-detection

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install ultralytics opencv-python deep-sort-realtime
pip install matplotlib pyyaml  # pathlib already ships with Python 3.8+, no install needed
```

- Download from Kaggle
- Extract to the `Dataset/` directory
- Verify the structure matches the layout above (an optional environment check follows)
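Optionally, confirm that the key packages and the GPU are visible (a minimal sketch; `torch` is installed as a dependency of `ultralytics`):

```python
# Optional sanity check of the environment
import cv2
import torch
import ultralytics

print("ultralytics:", ultralytics.__version__)
print("OpenCV:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available())
```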
```bash
# YOLOv8 base models (auto-downloaded on first use)
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt
```

traffic-detection/
├── model.ipynb # Model training notebook
├── tracking.ipynb # Vehicle tracking notebook
├── README.md # Project documentation
├── runs/
│ └── detect/
│ ├── train4/ # Training results
│ └── vehicle_detection*/ # Detection results
├── yolov8n.pt # YOLOv8 nano model
├── yolov8s.pt # YOLOv8 small model
└── yolo11n.pt # YOLO11 nano model
Open model.ipynb and run cells sequentially:
```python
from ultralytics import YOLO

# Load pretrained model
model = YOLO('yolov8n.pt')

# Train on custom dataset
results = model.train(
    data='Dataset/vehicle.yaml',
    epochs=25,
    batch=8,
    imgsz=320,
    device=0,
    patience=10
)
```

Run inference with the trained weights:

```python
from ultralytics import YOLO
from pathlib import Path
MODEL_PATH = Path('runs/detect/train4/weights/best.pt')
IMAGE_PATH = Path('Dataset/images/val/00 (189).jpg')
model = YOLO(MODEL_PATH)
results = model(IMAGE_PATH)
results[0].save('result.jpg')
```

Open `tracking.ipynb`:

```python
from ultralytics import YOLO
from pathlib import Path
MODEL_PATH = Path('runs/detect/train4/weights/best.pt')
VIDEO_PATH = Path('Dataset/images/val/Video1.mp4')
model = YOLO(MODEL_PATH)
# Track and count vehicles
counted_ids = set()
for result in model.track(source=VIDEO_PATH, stream=True,
                          tracker="bytetrack.yaml"):
    if result.boxes.id is not None:
        track_ids = result.boxes.id.int().tolist()
        for tid in track_ids:
            if tid not in counted_ids:
                counted_ids.add(tid)
                print(f"New vehicle: ID={tid} → Total={len(counted_ids)}")
print(f"Total vehicles: {len(counted_ids)}")from deep_sort_realtime.deepsort_tracker import DeepSort
import cv2
tracker = DeepSort(max_age=30, n_init=3, nn_budget=70)
cap = cv2.VideoCapture(str(VIDEO_PATH))
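# The loop below is a rough sketch (not the notebook's exact code) of how YOLO
# detections can be handed to DeepSORT; `model` is the YOLO model loaded above.
while True:
    ok, frame = cap.read()
    if not ok:
        break
    detections = []
    for box in model(frame)[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        # deep-sort-realtime expects ([left, top, width, height], confidence, class)
        detections.append(([x1, y1, x2 - x1, y2 - y1],
                           float(box.conf[0]), int(box.cls[0])))
    for track in tracker.update_tracks(detections, frame=frame):
        if track.is_confirmed():
            print("ID:", track.track_id, "box:", track.to_ltrb())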
# ... (see tracking.ipynb for full implementation)
```

Batch processing of multiple videos:

```python
# Process multiple videos
video_paths = ['video1.mp4', 'video2.mp4', 'video3.mp4']
for video_path in video_paths:
    results = model.track(source=video_path, save=True)
```

Dataset configuration (`vehicle.yaml`):

```yaml
# Dataset config (vehicle.yaml)
path: Dataset
train: images/train
val: images/val
nc: 7 # number of classes
names: ['Car', 'Number Plate', 'Blur Number Plate',
        'Two Wheeler', 'Auto', 'Bus', 'Truck']
```

Training configuration:

- Model: YOLOv8n (nano) - lightweight and fast
- Epochs: 25
- Batch Size: 8
- Image Size: 320x320
- Patience: 10 (early stopping)
- Device: GPU (CUDA)
runs/detect/train4/
├── weights/
│ ├── best.pt # Best model
│ └── last.pt # Last checkpoint
├── results.png # Training curves
├── confusion_matrix.png
└── val_batch0_pred.jpg # Validation predictions
ByteTrack:
- Pros: Fast, accurate, simple
- Best for: Real-time applications
- Config: `bytetrack.yaml`
DeepSORT:
- Pros: Robust re-identification, handles occlusions
- Best for: Complex scenes with occlusions
- Parameters (see the sketch below):
  - `max_age=30`: Max frames to keep a track alive
  - `n_init=3`: Frames before confirming a track
  - `nn_budget=70`: Maximum appearance-gallery size
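For reference, this is how those parameters map onto the `DeepSort` constructor used in `tracking.ipynb` (same call as above, with explanatory comments):

```python
from deep_sort_realtime.deepsort_tracker import DeepSort

tracker = DeepSort(
    max_age=30,    # drop a track after 30 consecutive frames without a match
    n_init=3,      # require 3 consecutive hits before a track is confirmed
    nn_budget=70,  # cap the stored appearance features (gallery size) at 70
)
```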
Results:
- Training Time: ~25 epochs
- mAP: Check `runs/detect/train4/results.png` (or recompute it with the snippet below)
- Inference Speed: ~30-60 FPS (depending on hardware)
- Unique Vehicle Counting: ✅ Accurate
- ID Consistency: ✅ Stable across frames
- Multi-class Support: ✅ All 7 classes
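To get exact numbers rather than reading them off the plot, the validation metrics can be recomputed from the saved weights (a sketch; `metrics.box.map` is the mAP50-95 reported by Ultralytics):

```python
from ultralytics import YOLO

model = YOLO('runs/detect/train4/weights/best.pt')
metrics = model.val(data='Dataset/vehicle.yaml')
print(f"mAP50-95: {metrics.box.map:.3f}")
print(f"mAP50:    {metrics.box.map50:.3f}")
```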
1. CUDA Out of Memory
```python
# Reduce batch size
model.train(batch=4, imgsz=320)
```

2. Video Not Playing

```python
# Check OpenCV installation
import cv2
print(cv2.__version__)
```

3. Tracking IDs Jumping

```python
# Adjust tracker confidence
model.track(source=video, conf=0.3, iou=0.5)
```

Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit changes (`git commit -m 'Add AmazingFeature'`)
- Push to branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Dataset: Saumya Patel on Kaggle
- YOLOv8: Ultralytics
- DeepSORT: nwojke
- ByteTrack: ifzhang
Author: Ali Mohtrami
Repository: github.com/nicekid1/traffic-detection
If you found this project helpful, please consider giving it a star!
Made with care for Traffic Monitoring