AutoSeg-Localization is a modular and extensible repository designed to guide researchers and practitioners through the semantic segmentation and vehicle localization workflow for autonomous driving applications. It builds on hands-on assignments from the Automated and Connected Driving Challenges (ACDC) MOOC by RWTH Aachen University and extends them into a reproducible research framework.
The repository provides:
- A Dockerized environment for reproducibility and portability.
- Jupyter notebooks covering preprocessing, model training, and evaluation.
- Structured folders for datasets, experiments, and literature.
- A roadmap from simple baselines to full-fledged semantic segmentation pipelines.
```
AutoSeg-Localization/
│
├── docker/
│   ├── Dockerfile
│   ├── requirements.txt
│   └── run.sh
│
├── notebooks/
│   ├── Localization.ipynb
│   ├── assets/
│   ├── datasets/
│   ├── grid_mapping/
│   ├── ipm_assets/
│   ├── localization/
│   ├── object_detection/
│   ├── segmentation_utils/
│   └── tensorflow_datasets/
│
├── experiments/
│   ├── runs/
│   └── configs/
│
├── literature/
│   ├── papers/
│   └── summaries.md
│
├── .gitignore
├── LICENSE
└── README.md
```
This repository is structured as a step-by-step learning and experimentation path:
- **Data Preparation**
  - Preprocessing raw datasets (cropping, resizing, augmentations).
  - Managing datasets for reproducibility.
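The preprocessing steps above can be sketched with plain NumPy; all function names here are illustrative helpers, not utilities the repository ships:

```python
import numpy as np

def center_crop(img: np.ndarray, size: tuple) -> np.ndarray:
    """Crop the central (h, w) window from an image or label array."""
    h, w = size
    top = (img.shape[0] - h) // 2
    left = (img.shape[1] - w) // 2
    return img[top:top + h, left:left + w]

def resize_nearest(img: np.ndarray, size: tuple) -> np.ndarray:
    """Nearest-neighbour resize; label maps need this to stay categorical."""
    h, w = size
    rows = (np.arange(h) * img.shape[0] / h).astype(int)
    cols = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[rows][:, cols]

def random_hflip(img: np.ndarray, label: np.ndarray, rng: np.random.Generator):
    """Horizontal flip augmentation applied jointly to image and label map."""
    if rng.random() < 0.5:
        return img[:, ::-1], label[:, ::-1]
    return img, label
```

Cropping and flipping must always be applied identically to the image and its label map, otherwise pixels and labels drift out of alignment.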
- **Model Development**
  - Implementing baseline models (e.g., U-Net, FCN).
  - Exploring advanced architectures (DeepLab, SegNet, Transformer-based).
- **Training & Experimentation**
  - Defining configs (`experiments/configs/`).
  - Tracking runs and metrics (`experiments/runs/`).
  - Hyperparameter tuning.
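As an illustration, a run config stored under `experiments/configs/` might hold fields like the following; every key and value below is hypothetical, not a schema this repository defines:

```python
# Hypothetical run configuration: dataset, model, and training settings
# grouped so a single file fully describes one experiment.
config = {
    "dataset": {"name": "cityscapes", "crop": [512, 1024], "batch_size": 8},
    "model": {"arch": "unet", "num_classes": 19},
    "train": {"epochs": 80, "lr": 1e-3, "optimizer": "adam", "seed": 42},
    # Keeping the output path inside experiments/runs/ ties metrics to configs.
    "output_dir": "experiments/runs/unet_baseline",
}
```

Pinning a random seed and writing each run to its own directory is what makes hyperparameter sweeps comparable after the fact.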
Evaluation & Metrics
- Standard metrics: IoU, pixel accuracy, confusion matrices.
- Visualization of segmentation maps and error distributions.
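A minimal NumPy sketch of the standard metrics listed above (confusion matrix, per-class IoU, pixel accuracy); this is a generic formulation, not code from the notebooks:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Per-pixel confusion matrix: rows = ground truth, cols = prediction."""
    idx = y_true.ravel() * num_classes + y_pred.ravel()
    return np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)

def iou_per_class(cm):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c); NaN where a class never occurs."""
    tp = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    return np.where(union > 0, tp / np.maximum(union, 1), np.nan)

def pixel_accuracy(cm):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return np.diag(cm).sum() / cm.sum()
```

Computing everything from one confusion matrix keeps the metrics consistent with each other and makes per-class error analysis cheap.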
- **Localization & Sensor Fusion**
  - Evaluating vehicle trajectory estimation.
  - Comparing ground truth vs. estimated poses.
  - Analyzing errors in position, yaw, and vehicle frame deviations.
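The pose-error analysis above can be sketched as follows; decomposing the position error into longitudinal/lateral components in the ground-truth vehicle frame is a common convention, and the function names are illustrative:

```python
import numpy as np

def wrap_angle(a):
    """Wrap angle(s) to (-pi, pi] so yaw errors never exceed a half turn."""
    return np.arctan2(np.sin(a), np.cos(a))

def trajectory_errors(gt_xy, gt_yaw, est_xy, est_yaw):
    """Per-pose errors between ground-truth and estimated trajectories.

    Returns the Euclidean position error, the wrapped yaw error, and the
    position error rotated into the ground-truth vehicle frame
    (longitudinal = along heading, lateral = left of heading).
    """
    diff = est_xy - gt_xy
    pos_err = np.linalg.norm(diff, axis=1)
    yaw_err = wrap_angle(est_yaw - gt_yaw)
    c, s = np.cos(gt_yaw), np.sin(gt_yaw)
    lon = c * diff[:, 0] + s * diff[:, 1]
    lat = -s * diff[:, 0] + c * diff[:, 1]
    return pos_err, yaw_err, lon, lat
```

Separating longitudinal from lateral error is what reveals systematic effects, e.g. a constant lateral bias pointing to a miscalibrated sensor offset.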
- **Scaling Up** (future extensions)
  - Incorporating large-scale datasets (Cityscapes, KITTI, nuScenes).
  - Adding experiment management tools (MLflow, Weights & Biases).
  - Extending to 3D segmentation & multi-modal fusion (LiDAR + camera).
Ensure you have Docker installed, then build the image and start the container:

```bash
git clone https://github.com/your-username/Segmentation-Lab.git
cd Segmentation-Lab
docker build -t segmentation-lab -f docker/Dockerfile .
bash docker/run.sh
```

This mounts your repo into the container and starts Jupyter Lab.
- `localization/`: Localization Evaluation Notebook
  - Evaluates vehicle localization accuracy.
  - Compares estimated vs. ground-truth trajectories.
  - Analyzes yaw, longitudinal/lateral deviations, and error distributions.
  - Visualizes trajectory alignment and error heatmaps.
  - Outcome: identifies systematic localization errors and potential improvements.
This work is inspired by and extends assignments from:
Automated and Connected Driving Challenges (ACDC), a Massive Open Online Course (MOOC) on edX.org taught by the Institute for Automotive Engineering (ika) of RWTH Aachen University. Enroll for free and learn how to shape future automated and connected mobility!
Additional references and papers are stored in the literature/ folder.
Trajectory comparison (ground-truth vs. estimated):
Segmentation map sample:
This project is licensed under the MIT License.

