
This repository hosts a replicated, refined, and optimized version of the 2D Skull Reconstruction using UNet, with known issues fixed.


HealthComputingLab/2DSkullReconstructionInUNet


🧠 Skull Reconstruction UNet

Deep learning meets medical imaging: A UNet-based framework for skull reconstruction, image inpainting, and super-resolution tasks.

Research Project | This repository contains production-ready model architectures and training utilities. Large datasets and example notebooks are maintained separately.


🚀 What's Inside

This research codebase provides:

  • 🎯 Training Pipeline: main_run_scan_rebuild.py orchestrates the complete training workflow
  • ✅ Smoke Testing: run_smoke.py validates your environment and dataset configuration
  • ⚙️ Centralized Config: config.py manages paths, hyperparameters, and training criteria
  • 🏗️ Modular Architecture: Unet_Architecture/ contains specialized implementations:
    • Image inpainting for skull defect reconstruction
    • Super-resolution for enhanced image quality

📋 Requirements

  • Python 3.8+ (verified compatibility)
  • GPU Recommended — CUDA-capable hardware dramatically accelerates training
  • PyTorch — Install from pytorch.org matching your system

⚡ Quick Setup

Fire up your environment in three steps:

1. Create Virtual Environment

conda create -n skull2d python=3.9 -y
conda activate skull2d

2. Install Dependencies

pip install -r requirements.txt

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# If this prints False, reinstall PyTorch with a CUDA-enabled build for your system

3. (Optional) Install as Package

Make Unet_Architecture importable from anywhere:

pip install -e .
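Editable installs need packaging metadata at the repository root. If the repo does not already ship one, a minimal pyproject.toml along these lines would work (project name, version, and package list are illustrative, not the repository's actual metadata):

```toml
# Hypothetical minimal pyproject.toml enabling `pip install -e .`
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "skull-reconstruction-unet"
version = "0.1.0"

[tool.setuptools]
packages = ["Unet_Architecture"]
```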

🎛️ Configuration

All settings live in config.py. Out of the box, it expects:

📂 dataset/
   ├─ 📁 train/  → PATH_TRAIN
   └─ 📁 val/    → PATH_VAL

Key parameters you can tune:

  • BATCH_SIZE — Trade memory for speed
  • LEARNING_RATE — Control convergence behavior
  • IMAGE_WIDTH & IMAGE_HEIGHT — Input dimensions
  • Early stopping criteria and epoch limits

💡 Pro tip: If your data lives elsewhere, just update the path variables at the top of config.py.
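As a rough sketch, the tunable variables in config.py likely look something like this (the values below are illustrative placeholders, not the repository's actual defaults):

```python
# Illustrative sketch of config.py; paths and values are placeholders,
# not the repository's actual settings.
PATH_TRAIN = "dataset/train"   # training images
PATH_VAL = "dataset/val"       # validation images

BATCH_SIZE = 8                 # larger batches trade memory for speed
LEARNING_RATE = 1e-4           # controls convergence behavior
IMAGE_WIDTH = 256              # input width
IMAGE_HEIGHT = 256             # input height
MAX_EPOCHS = 100               # hard upper bound on training length
EARLY_STOPPING_PATIENCE = 10   # epochs without improvement before stopping
```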


📁 Dataset Structure

Organize your images like this:

dataset/
  ├─ train/
  │   ├─ skull_001.png
  │   ├─ skull_002.png
  │   └─ ...
  └─ val/
      ├─ skull_val_001.png
      ├─ skull_val_002.png
      └─ ...

Supported formats: .png, .jpg, .jpeg
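A minimal helper for collecting files with those extensions might look like this (the function name is ours for illustration, not part of the repo):

```python
from pathlib import Path

# Extensions the README lists as supported
SUPPORTED_EXTS = {".png", ".jpg", ".jpeg"}

def list_images(root):
    """Return a sorted list of supported image files under `root`."""
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in SUPPORTED_EXTS
    )
```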

⚠️ Important: Never commit large datasets! Host them externally (cloud storage, institutional servers) and document download instructions separately.


🧪 Verify Your Setup

Before training, run the smoke test to catch configuration issues early:

python run_smoke.py

What it checks:

  • ✓ Config module imports correctly
  • ✓ Dataset paths point to existing directories
  • ✓ UNet architecture modules load without errors
  • ✓ Python environment has required packages

If everything passes, you're ready to train! 🎉
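The dataset-path check, at least, can be approximated in a few lines. This is our sketch of the idea, not the actual run_smoke.py code; attribute names follow the PATH_TRAIN / PATH_VAL convention described above:

```python
from pathlib import Path

def check_dataset_paths(config):
    """Return a list of problems with the dataset paths on a config object.

    Sketch of one check a smoke test would perform (hypothetical helper,
    not the repository's implementation).
    """
    problems = []
    for name in ("PATH_TRAIN", "PATH_VAL"):
        path = getattr(config, name, None)
        if path is None:
            problems.append(f"{name} is not defined in config")
        elif not Path(path).is_dir():
            problems.append(f"{name} = {path!r} is not an existing directory")
    return problems
```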


🏃 Running Training

Launch training from the repository root:

python main_run_scan_rebuild.py

The script will:

  1. Load and validate your dataset
  2. Initialize the UNet architecture
  3. Train with progress monitoring
  4. Save checkpoints automatically
  5. Validate on your validation set

Check the top of main_run_scan_rebuild.py for command-line options and advanced configuration.
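One of the moving parts above, the early-stopping criterion, can be sketched framework-free (class name and defaults are ours, not the repository's):

```python
class EarlyStopping:
    """Stop training once validation loss stops improving.

    Sketch of a patience-based early-stopping criterion like the one
    config.py exposes; this is not the repository's implementation.
    """

    def __init__(self, patience=10, min_delta=0.0):
        self.patience = patience      # epochs to wait without improvement
        self.min_delta = min_delta    # minimum change that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop training."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```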


🗂️ Project Structure

skull-reconstruction-unet/
├─ 📄 config.py                    # Central configuration hub
├─ 🎯 main_run_scan_rebuild.py    # Training orchestration
├─ ✅ run_smoke.py                 # Environment validation
├─ 📦 Unet_Architecture/           # Model implementations
│   ├─ Image_Painting/             # Inpainting models
│   └─ Super_Resolution/           # SR models
├─ 📂 dataset/                     # Your data goes here
│   ├─ train/
│   └─ val/
└─ 📋 requirements.txt             # Python dependencies

📝 Development Notes

Best Practices:

  • 🔒 Keep datasets external — use .gitignore to exclude dataset/ contents
  • 📊 Track experiments with clear naming conventions
  • 💾 Regularly backup trained model checkpoints
  • 🧬 Document any architectural modifications
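For the first point, a .gitignore entry along these lines keeps the directory in the repo while excluding its contents (sketch; adjust to your layout):

```gitignore
# Exclude dataset contents but keep the directory structure
dataset/*
!dataset/.gitkeep
```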

Next Steps for Publication:

  • Add LICENSE file (consider MIT, Apache 2.0, or GPL)
  • Include CITATION.cff for academic attribution
  • Create CONTRIBUTING.md for community guidelines
  • Expand documentation with dataset preparation guide

🤝 Contributing

This is research code under active development. Contributions, bug reports, and feature requests are welcome through issues and pull requests.


Built with PyTorch | Powered by UNet | Advancing Medical AI
