Guide to Semantic Segmentation & Localization

AutoSeg-Localization is a modular and extensible repository designed to guide researchers and practitioners through the semantic segmentation and vehicle localization workflow for autonomous driving applications. It builds on hands-on assignments from the Automated and Connected Driving Challenges (ACDC) MOOC by RWTH Aachen University and extends them into a reproducible research framework.

The repository provides:

  • A Dockerized environment for reproducibility and portability.
  • Jupyter notebooks covering preprocessing, model training, and evaluation.
  • Structured folders for datasets, experiments, and literature.
  • A roadmap for advancing from simple baselines to full-fledged semantic segmentation pipelines.

Repository Structure

AutoSeg-Localization/
│
├── docker/
│   ├── Dockerfile
│   ├── requirements.txt
│   └── run.sh
│
├── notebooks/
│   ├── Localization.ipynb
│   ├── assets/
│   ├── datasets/
│   ├── grid_mapping/
│   ├── ipm_assets/
│   ├── localization/
│   ├── object_detection/
│   ├── segmentation_utils/
│   └── tensorflow_datasets/
│
├── experiments/
│   ├── runs/
│   └── configs/
│
├── literature/
│   ├── papers/
│   └── summaries.md
│
├── .gitignore
├── LICENSE
└── README.md

🗺 Roadmap for Semantic Segmentation & Localization

This repository is structured as a step-by-step learning and experimentation path:

  1. Data Preparation

    • Preprocessing raw datasets (cropping, resizing, augmentations); see the preprocessing sketch after this list.
    • Managing datasets for reproducibility.
  2. Model Development

    • Implementing baseline models (e.g., U-Net, FCN); see the baseline-model sketch after this list.
    • Exploring advanced architectures (DeepLab, SegNet, Transformer-based).
  3. Training & Experimentation

    • Defining configs (experiments/configs/).
    • Tracking runs and metrics (experiments/runs/).
    • Hyperparameter tuning.
  4. Evaluation & Metrics

    • Standard metrics: IoU, pixel accuracy, confusion matrices; see the metrics sketch after this list.
    • Visualization of segmentation maps and error distributions.
  5. Localization & Sensor Fusion

    • Evaluating vehicle trajectory estimation.
    • Comparing ground truth vs. estimated poses.
    • Analyzing errors in position, yaw, and vehicle frame deviations.
  6. Scaling Up (future extensions)

    • Incorporating large-scale datasets (Cityscapes, KITTI, nuScenes).
    • Adding experiment management tools (MLflow, Weights & Biases).
    • Extending to 3D segmentation & multi-modal fusion (LiDAR + camera).
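
As a concrete illustration of roadmap step 1, here is a minimal sketch of joint image/label preprocessing (center-crop, resize, horizontal flip). It is illustrative only: the helper name, the use of OpenCV, and the output size are assumptions, not code from this repository.

import cv2
import numpy as np

def preprocess(image, label, out_size=(512, 256)):
    """Hypothetical helper: crop, resize, and augment an image/label pair."""
    h, w = image.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    # Center-crop image and label identically so pixels stay aligned
    image = image[top:top + s, left:left + s]
    label = label[top:top + s, left:left + s]
    image = cv2.resize(image, out_size, interpolation=cv2.INTER_LINEAR)
    # Nearest-neighbor keeps class IDs discrete in the label map
    label = cv2.resize(label, out_size, interpolation=cv2.INTER_NEAREST)
    if np.random.rand() < 0.5:  # random horizontal flip, applied to both
        image, label = image[:, ::-1], label[:, ::-1]
    return image, label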
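For roadmap step 2, a baseline can be as small as an encoder-decoder with a single skip connection. The sketch below uses Keras; that framework choice is an assumption (suggested by the tensorflow_datasets folder), and the input shape and class count are placeholders.

import tensorflow as tf
from tensorflow.keras import layers

def build_baseline(input_shape=(256, 256, 3), num_classes=10):
    """Hypothetical minimal U-Net-style baseline (one skip connection)."""
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    skip = x                      # keep full-resolution features
    x = layers.MaxPooling2D()(x)  # encoder: downsample once
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)  # decoder: back to input resolution
    x = layers.Concatenate()([x, skip])
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)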
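For roadmap step 4, per-class IoU, pixel accuracy, and the confusion matrix can all be derived from a single bincount over the flattened label maps. A minimal NumPy sketch (a hypothetical helper, not this repository's evaluation code):

import numpy as np

def segmentation_metrics(y_true, y_pred, num_classes):
    """y_true, y_pred: integer label maps of identical shape."""
    # Confusion matrix: rows = ground-truth class, columns = predicted class
    cm = np.bincount(
        num_classes * y_true.ravel() + y_pred.ravel(),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    tp = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)   # per-class IoU (0 where class absent)
    pixel_acc = tp.sum() / cm.sum()   # overall pixel accuracy
    return iou, pixel_acc, cm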

🐳 Getting Started with Docker

Ensure you have Docker installed.

1. Clone the Repository

git clone https://github.com/infinityengi/AutoSeg-Localization.git
cd AutoSeg-Localization

2. Build the Docker Image

docker build -t segmentation-lab -f docker/Dockerfile .

3. Run the Container

bash docker/run.sh

This mounts your repo into the container and starts Jupyter Lab.

4. Open Jupyter

Open the URL printed in the terminal (typically http://localhost:8888) in your browser.

Notebooks Overview

  • notebooks/Localization.ipynb (Localization Evaluation)

    • Evaluates vehicle localization accuracy.
    • Compares estimated vs. ground-truth trajectories.
    • Analyzes yaw, longitudinal/lateral deviations, and error distributions; see the sketch below.
    • Visualizes trajectory alignment and error heatmaps.
    • Outcome: identifies systematic localization errors and potential improvements.
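
A minimal sketch of the deviation analysis this notebook performs, assuming poses arrive as (N, 2) positions plus (N,) yaw angles in radians; the notebook's actual data layout may differ.

import numpy as np

def pose_errors(gt_xy, gt_yaw, est_xy, est_yaw):
    """Hypothetical helper: vehicle-frame position errors and yaw error."""
    d = est_xy - gt_xy
    # Rotate the world-frame position error into the ground-truth vehicle
    # frame: longitudinal = along the heading, lateral = perpendicular to it
    lon = d[:, 0] * np.cos(gt_yaw) + d[:, 1] * np.sin(gt_yaw)
    lat = -d[:, 0] * np.sin(gt_yaw) + d[:, 1] * np.cos(gt_yaw)
    # Wrap the yaw difference into (-pi, pi]
    yaw_err = np.arctan2(np.sin(est_yaw - gt_yaw), np.cos(est_yaw - gt_yaw))
    return lon, lat, yaw_err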

Literature & References

This work is inspired by and extends assignments from:

Automated and Connected Driving Challenges (ACDC), a Massive Open Online Course (MOOC) on edX.org, taught by the Institute for Automotive Engineering (ika) of RWTH Aachen University. Enrollment is free.

Additional references and papers are stored in the literature/ folder.


Example Visuals

Trajectory comparison (ground-truth vs. estimated):

[Image: Trajectory Comparison]

Segmentation map sample:

[Image: Segmentation Example]


License

This project is licensed under the MIT License.
