
Traffic Vehicle Detection & Tracking System

A comprehensive vehicle detection and tracking system using YOLOv8 and multiple tracking algorithms (ByteTrack & DeepSORT) for real-time traffic monitoring and vehicle counting.


Table of Contents

  • Features
  • Demo
  • Dataset
  • Installation
  • Project Structure
  • Usage
  • Model Training
  • Tracking Methods
  • Results
  • Troubleshooting
  • Contributing
  • License
  • Acknowledgments
  • Contact

Features

  • Multi-class Detection: Detect 7 object classes

    • Car
    • Two Wheeler (Motorcycles, Bikes)
    • Auto (Rickshaw)
    • Bus
    • Truck
    • Number Plate
    • Blur Number Plate
  • Advanced Tracking:

    • ByteTrack implementation
    • DeepSORT integration
    • Unique vehicle counting
    • Real-time ID assignment
  • Flexible Output:

    • Save tracked videos
    • Export detection results
    • Frame-by-frame analysis

Demo

The system processes traffic videos and:

  1. Detects vehicles in each frame
  2. Assigns unique IDs to tracked vehicles
  3. Counts vehicles passing through the scene
  4. Saves annotated output video
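
A minimal sketch of this end-to-end flow, assuming trained weights at runs/detect/train4/weights/best.pt (produced in the Model Training section) and a hypothetical input file traffic.mp4:

from ultralytics import YOLO

# Load the fine-tuned weights (path assumed from the training section below)
model = YOLO('runs/detect/train4/weights/best.pt')

# 'traffic.mp4' is a placeholder; use your own traffic video.
# save=True writes an annotated copy of the video under runs/detect/
results = model.track(source='traffic.mp4', tracker='bytetrack.yaml', save=True)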

Dataset

Source: Traffic Vehicles Object Detection Dataset

Dataset Statistics:

  • Classes: 7 (five vehicle categories plus two number-plate classes)
  • Split: Train / Validation / Test
  • Format: YOLO format (normalized bounding boxes)
  • Annotations: .txt files with class_id, x_center, y_center, width, height
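
Each label file holds one line per object, with all values normalized to the image size. A small parsing sketch (the label file name below is hypothetical):

from pathlib import Path

# Hypothetical file; real labels live under Dataset/labels/train and Dataset/labels/val
label_file = Path('Dataset/labels/train/example.txt')

for line in label_file.read_text().splitlines():
    class_id, x_center, y_center, width, height = line.split()
    # class_id is an integer index into the names list in vehicle.yaml
    print(int(class_id), float(x_center), float(y_center), float(width), float(height))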

Dataset Structure:

Dataset/
├── vehicle.yaml          # Dataset configuration
├── images/
│   ├── train/           # Training images
│   ├── val/             # Validation images
│   └── test/            # Test images
└── labels/
    ├── train/           # Training labels
    └── val/             # Validation labels

Installation

Prerequisites

  • Python 3.8 or higher
  • CUDA-capable GPU (recommended for training)
  • 8GB+ RAM
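
To confirm that a CUDA-capable GPU is visible to PyTorch (installed as a dependency of ultralytics), a quick check:

import torch

# True means training can use device=0; otherwise fall back to the CPU
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))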

Step 1: Clone the Repository

git clone https://github.com/nicekid1/traffic-detection.git
cd traffic-detection

Step 2: Create Virtual Environment (Optional but Recommended)

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

Step 3: Install Dependencies

pip install ultralytics opencv-python deep-sort-realtime
pip install matplotlib pyyaml  # pathlib is part of the standard library and needs no install

Step 4: Download Dataset

  1. Download from Kaggle
  2. Extract to Dataset/ directory
  3. Verify structure matches above
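
A quick sketch to verify the layout matches the Dataset Structure section (paths taken from the tree above):

from pathlib import Path

# Expected paths from the Dataset Structure section
expected = [
    'Dataset/vehicle.yaml',
    'Dataset/images/train', 'Dataset/images/val', 'Dataset/images/test',
    'Dataset/labels/train', 'Dataset/labels/val',
]
for p in expected:
    print(f"{p}: {'OK' if Path(p).exists() else 'MISSING'}")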

Step 5: Download Pre-trained Weights (Optional)

# YOLOv8 base models (auto-downloaded on first use)
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt

Project Structure

traffic-detection/
├── model.ipynb              # Model training notebook
├── tracking.ipynb           # Vehicle tracking notebook
├── README.md               # Project documentation
├── runs/
│   └── detect/
│       ├── train4/        # Training results
│       └── vehicle_detection*/  # Detection results
├── yolov8n.pt             # YOLOv8 nano model
├── yolov8s.pt             # YOLOv8 small model
└── yolo11n.pt             # YOLO11 nano model

Usage

1. Train Custom Model

Open model.ipynb and run cells sequentially:

from ultralytics import YOLO

# Load pretrained model
model = YOLO('yolov8n.pt')

# Train on custom dataset
results = model.train(
    data='Dataset/vehicle.yaml',
    epochs=25,
    batch=8,
    imgsz=320,
    device=0,
    patience=10
)

2. Test Detection on Image

from ultralytics import YOLO
from pathlib import Path

MODEL_PATH = Path('runs/detect/train4/weights/best.pt')
IMAGE_PATH = Path('Dataset/images/val/00 (189).jpg')

model = YOLO(MODEL_PATH)
results = model(IMAGE_PATH)
results[0].save('result.jpg')

3. Track Vehicles in Video

Open tracking.ipynb:

Method A: ByteTrack (Recommended)

from ultralytics import YOLO
from pathlib import Path

MODEL_PATH = Path('runs/detect/train4/weights/best.pt')
VIDEO_PATH = Path('Dataset/images/val/Video1.mp4')

model = YOLO(MODEL_PATH)

# Track and count vehicles
counted_ids = set()
for result in model.track(source=VIDEO_PATH, stream=True, 
                         tracker="bytetrack.yaml"):
    if result.boxes.id is not None:
        track_ids = result.boxes.id.int().tolist()
        for tid in track_ids:
            if tid not in counted_ids:
                counted_ids.add(tid)
                print(f"New vehicle: ID={tid} → Total={len(counted_ids)}")

print(f"Total vehicles: {len(counted_ids)}")

Method B: DeepSORT

from deep_sort_realtime.deepsort_tracker import DeepSort
import cv2

tracker = DeepSort(max_age=30, n_init=3, nn_budget=70)

cap = cv2.VideoCapture(str(VIDEO_PATH))
# ... (see tracking.ipynb for full implementation)
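
The full per-frame loop is in tracking.ipynb. As a rough, hedged sketch of the usual deep-sort-realtime pattern (not the notebook's exact code), feeding YOLO detections into the tracker:

from pathlib import Path
import cv2
from ultralytics import YOLO
from deep_sort_realtime.deepsort_tracker import DeepSort

model = YOLO('runs/detect/train4/weights/best.pt')
tracker = DeepSort(max_age=30, n_init=3, nn_budget=70)

cap = cv2.VideoCapture(str(Path('Dataset/images/val/Video1.mp4')))
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on the current frame
    detections = []
    for box in model(frame)[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        # deep-sort-realtime expects ([left, top, width, height], confidence, class)
        detections.append(([x1, y1, x2 - x1, y2 - y1],
                           float(box.conf[0]), int(box.cls[0])))
    # Update tracks; the frame is used to extract appearance embeddings
    for track in tracker.update_tracks(detections, frame=frame):
        if track.is_confirmed():
            print(track.track_id, track.to_ltrb())
cap.release()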

4. Batch Processing

# Process multiple videos
video_paths = ['video1.mp4', 'video2.mp4', 'video3.mp4']

for video_path in video_paths:
    results = model.track(source=video_path, save=True)

Model Training

Training Configuration

# Dataset config (vehicle.yaml)
path: Dataset
train: images/train
val: images/val

nc: 7  # number of classes
names: ['Car', 'Number Plate', 'Blur Number Plate', 
        'Two Wheeler', 'Auto', 'Bus', 'Truck']

Training Parameters

  • Model: YOLOv8n (nano) - lightweight and fast
  • Epochs: 25
  • Batch Size: 8
  • Image Size: 320x320
  • Patience: 10 (early stopping)
  • Device: GPU (CUDA)

Training Results Location

runs/detect/train4/
├── weights/
│   ├── best.pt          # Best model
│   └── last.pt          # Last checkpoint
├── results.png          # Training curves
├── confusion_matrix.png
└── val_batch0_pred.jpg  # Validation predictions
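
To re-evaluate the best checkpoint and print the headline metrics (paths from the tree above; the exact numbers depend on your run):

from ultralytics import YOLO

model = YOLO('runs/detect/train4/weights/best.pt')

# Runs validation on the val split defined in vehicle.yaml
metrics = model.val(data='Dataset/vehicle.yaml')
print(f"mAP50:    {metrics.box.map50:.3f}")
print(f"mAP50-95: {metrics.box.map:.3f}")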

Tracking Methods

1. ByteTrack

  • Pros: Fast, accurate, simple
  • Best for: Real-time applications
  • Config: bytetrack.yaml

2. DeepSORT

  • Pros: Robust re-identification, handles occlusions
  • Best for: Complex scenes with occlusions
  • Parameters:
    • max_age=30: Max frames to keep track alive
    • n_init=3: Frames before confirming track
    • nn_budget=70: Maximum gallery size

Results

Model Performance

  • Training Duration: 25 epochs (may stop earlier via patience=10)
  • mAP: Check runs/detect/train4/results.png
  • Inference Speed: ~30-60 FPS (depending on hardware)

Tracking Performance

  • Unique Vehicle Counting: ✅ Accurate
  • ID Consistency: ✅ Stable across frames
  • Multi-class Support: ✅ All 7 classes
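
For per-class counts instead of a single total, a hedged extension of the ByteTrack loop above, keyed on track ID and class:

from collections import defaultdict
from ultralytics import YOLO

model = YOLO('runs/detect/train4/weights/best.pt')

counts = defaultdict(set)  # class name -> set of unique track IDs
for result in model.track(source='Dataset/images/val/Video1.mp4', stream=True,
                          tracker='bytetrack.yaml'):
    if result.boxes.id is None:
        continue
    ids = result.boxes.id.int().tolist()
    classes = result.boxes.cls.int().tolist()
    for tid, cls in zip(ids, classes):
        counts[model.names[cls]].add(tid)

for name, tracked in counts.items():
    print(f"{name}: {len(tracked)} unique vehicles")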

Troubleshooting

Common Issues

1. CUDA Out of Memory

# Reduce batch size
model.train(batch=4, imgsz=320)

2. Video Not Playing

# Check OpenCV installation
import cv2
print(cv2.__version__)

3. Tracking IDs Jumping

# Adjust tracker confidence
model.track(source=video, conf=0.3, iou=0.5)

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/AmazingFeature)
  3. Commit changes (git commit -m 'Add AmazingFeature')
  4. Push to branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

Contact

Author: Ali Mohtrami
Repository: github.com/nicekid1/traffic-detection


If you found this project helpful, please consider giving it a star!

Made with care for Traffic Monitoring
