pythonicshariful/Traffic_Detector_-_Instance_Segmentation

Traffic Detector & Instance Segmentation

A small project demonstrating traffic object detection and instance segmentation with Ultralytics YOLOv8 models. The repository contains a template inference script and two pretrained model weight files (yolov8s-seg.pt and yolov8x-seg.pt).

Highlights

  • Run instance segmentation on images or videos to detect vehicles, pedestrians, signs, and other traffic entities.
  • Includes two pretrained weights: lightweight yolov8s-seg.pt (faster) and larger yolov8x-seg.pt (higher accuracy).
  • A template script traffic_detector_template.py is included to help you run inference or integrate into your pipeline.

Repository structure

  • traffic_detector_template.py — example/inference template that shows how to load a YOLOv8 segmentation model and run it on images/videos.
  • yolov8s-seg.pt — small segmentation model weights (fast/inference-friendly).
  • yolov8x-seg.pt — large segmentation model weights (more accurate, heavier).
  • README.md — this file.

Quick contract

  • Inputs: image/video file path or camera stream
  • Outputs: annotated image/video with instance masks and bounding boxes (saved to disk or displayed)
  • Error modes: missing model weights file, an incompatible PyTorch build (e.g. a CUDA wheel on a CPU-only machine), or incorrect script arguments
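The error modes above can be caught with a small pre-flight check before loading the model. This is a minimal sketch; `validate_inputs` is a hypothetical helper, not part of traffic_detector_template.py:

```python
from pathlib import Path

def validate_inputs(weights, source):
    """Return a list of human-readable problems; an empty list means OK.

    Hypothetical helper for illustration only.
    """
    problems = []
    if not Path(weights).is_file():
        problems.append(f"model weights not found: {weights}")
    # A camera index is given as digits ("0"); anything else must be a file.
    if not source.isdigit() and not Path(source).is_file():
        problems.append(f"input source not found: {source}")
    return problems
```

Running such a check up front turns a cryptic traceback into an actionable message before any heavy imports or model loading happen.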

Requirements

  • Python 3.8+ (3.10/3.11 recommended)
  • PyTorch (see https://pytorch.org for the correct command for your CUDA version)
  • The ultralytics pip package (provides YOLOv8)
  • OpenCV for image/video I/O

You can install dependencies in two ways:

Option 1: Install from requirements.txt (recommended)

This installs the exact versions tested with the project:

python -m venv .venv
.\.venv\Scripts\Activate.ps1   # Windows PowerShell; on Linux/macOS: source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt

Option 2: Manual minimal install

If you want to install just the core dependencies:

python -m venv .venv
.\.venv\Scripts\Activate.ps1   # Windows PowerShell; on Linux/macOS: source .venv/bin/activate
pip install --upgrade pip
pip install ultralytics opencv-python
# Install torch according to your environment; example for CPU-only:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu

Note: If you have an NVIDIA GPU, follow the official PyTorch installation instructions to install a CUDA build of torch. The requirements.txt includes CPU versions by default.

Usage (Quickstart)

The repository includes traffic_detector_template.py as a starting point. Here are the common usage patterns:

# Process a video file with display
python traffic_detector_template.py --source traffic.mp4 --output out.mp4 --show

# Process an image file
python traffic_detector_template.py --source image.jpg --output result.jpg --show

# Process a video without display (faster)
python traffic_detector_template.py --source traffic.mp4 --output out.mp4

# Use webcam (0 is default camera) with live display
python traffic_detector_template.py --source 0 --show

Command line arguments:

  • --source: Input source (video file, image file, or camera index)
  • --output: Output file path (optional, skip to not save output)
  • --show: Display results in a window while processing (optional)
  • --weights: Model weights file (defaults to yolov8s-seg.pt if not specified)

The script will process the input source, optionally display the results in real-time if --show is used, and save the output if an output path is specified.
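The documented flags can be wired up with a standard argparse parser. This is a hypothetical sketch mirroring the flags listed above; the actual argument handling in traffic_detector_template.py may differ in detail:

```python
import argparse

def parse_args(argv=None):
    # Hypothetical parser mirroring the documented flags.
    p = argparse.ArgumentParser(
        description="Traffic detection and instance segmentation")
    p.add_argument("--source", required=True,
                   help="video file, image file, or camera index")
    p.add_argument("--output", default=None,
                   help="optional output file path; omit to skip saving")
    p.add_argument("--show", action="store_true",
                   help="display results in a window while processing")
    p.add_argument("--weights", default="yolov8s-seg.pt",
                   help="model weights file")
    return p.parse_args(argv)
```

Note that `--source` stays a string even for camera input; downstream code decides whether "0" means a camera index or a file path.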

Example output

Instance segmentation example

Using the provided pretrained models

  • yolov8s-seg.pt — choose this for quick tests and real-time or low-resource inference.
  • yolov8x-seg.pt — choose this for higher-quality segmentation when latency is less critical.

Place the model weights in the project root (they are already included). In code, load them with Ultralytics:

from ultralytics import YOLO

model = YOLO('yolov8x-seg.pt')  # or 'yolov8s-seg.pt' for faster inference
results = model.predict(source='path/to/image.jpg', imgsz=640, conf=0.25)

Training (notes)

This repo does not include a full training pipeline. To train your own model with Ultralytics YOLOv8, prepare a COCO-format dataset or YAML dataset spec and run the yolo train command as documented by Ultralytics.
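For reference, a YOLOv8 segmentation dataset spec is a small YAML file pointing at your images and class names. The paths and class list below are placeholders for illustration, not a dataset shipped with this repo:

```yaml
# Hypothetical dataset spec (e.g. data.yaml); adjust paths and names to your data
path: datasets/traffic        # dataset root
train: images/train           # training images, relative to path
val: images/val               # validation images, relative to path
names:
  0: car
  1: truck
  2: bus
  3: pedestrian
  4: traffic_sign
```

With such a spec in place, training is run via the Ultralytics CLI (e.g. `yolo segment train data=data.yaml model=yolov8s-seg.pt epochs=100 imgsz=640`); see the Ultralytics documentation for the full set of options.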

Tips and troubleshooting

  • If inference is slow on CPU, install a GPU-enabled PyTorch build and ensure drivers/CUDA are correctly installed.
  • If you run into model / package version errors, pin package versions (PyTorch + ultralytics) consistent with your CUDA toolkit.
  • For Windows PowerShell, use the Activate.ps1 script to activate virtual environments.
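For the first tip above, a quick way to confirm whether your installed torch build can actually see a GPU is a guarded check like this sketch (it degrades gracefully when torch is not installed):

```python
import importlib.util

def cuda_status():
    """Best-effort report on whether a CUDA-enabled torch build is usable."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch  # imported lazily so the check also works without torch
    return "cuda available" if torch.cuda.is_available() else "cpu only"

print(cuda_status())
```

If this prints "cpu only" on a machine with an NVIDIA GPU, you most likely installed the CPU wheel and should reinstall torch following the official PyTorch instructions for your CUDA version.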

License & attribution

This project is a small demo and template. The YOLO models are provided by Ultralytics. Check Ultralytics' repository and license for model usage terms.

Contact

If you have questions or want improvements (example: add training scripts, dataset loader, or dockerfile), open an issue or reach out to the repository owner.

