Deep learning meets medical imaging: A UNet-based framework for skull reconstruction, image inpainting, and super-resolution tasks.
Research Project | This repository contains production-ready model architectures and training utilities. Large datasets and example notebooks are maintained separately.
This research codebase provides:
- 🎯 Training Pipeline — `main_run_scan_rebuild.py` orchestrates the complete training workflow
- ✅ Smoke Testing — `run_smoke.py` validates your environment and dataset configuration
- ⚙️ Centralized Config — `config.py` manages paths, hyperparameters, and training criteria
- 🏗️ Modular Architecture — `Unet_Architecture/` contains specialized implementations:
  - Image inpainting for skull defect reconstruction
  - Super-resolution for enhanced image quality
- Python 3.8+ (verified compatibility)
- GPU Recommended — CUDA-capable hardware dramatically accelerates training
- PyTorch — Install from pytorch.org matching your system
Fire up your environment in three steps:
```bash
conda create -n skull2d python=3.9 -y
conda activate skull2d
pip install -r requirements.txt
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# If torch.cuda.is_available() prints False, install a CUDA-enabled torch build
```

Make `Unet_Architecture` importable from anywhere:

```bash
pip install -e .
```

All settings live in `config.py`. Out of the box, it expects:
📂 dataset/
├─ 📁 train/ → PATH_TRAIN
└─ 📁 val/ → PATH_VAL
Key parameters you can tune:
- `BATCH_SIZE` — Trade memory for speed
- `LEARNING_RATE` — Control convergence behavior
- `IMAGE_WIDTH` & `IMAGE_HEIGHT` — Input dimensions
- Early stopping criteria and epoch limits
💡 Pro tip: If your data lives elsewhere, just update the path variables at the top of config.py.
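As a rough sketch, the tunable settings above might look like this in `config.py` (illustrative only — the actual file may use different names and defaults beyond the ones listed above):

```python
# config.py (illustrative sketch -- not the repository's actual file)
import os

# Dataset locations; update these if your data lives elsewhere
PATH_TRAIN = os.path.join("dataset", "train")
PATH_VAL = os.path.join("dataset", "val")

# Core hyperparameters
BATCH_SIZE = 8        # lower this if you run out of GPU memory
LEARNING_RATE = 1e-4  # tune for convergence behavior
IMAGE_WIDTH = 256
IMAGE_HEIGHT = 256

# Training limits
MAX_EPOCHS = 100
EARLY_STOPPING_PATIENCE = 10  # stop if val loss stalls for this many epochs
```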
Organize your images like this:
dataset/
├─ train/
│ ├─ skull_001.png
│ ├─ skull_002.png
│ └─ ...
└─ val/
├─ skull_val_001.png
├─ skull_val_002.png
└─ ...
Supported formats: .png, .jpg, .jpeg
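For orientation, a minimal PyTorch `Dataset` that consumes this flat-folder layout could look like the sketch below. This is not the repository's actual loader; the class name and tensor conventions are illustrative.

```python
# Illustrative loader for the layout above -- not the repository's actual code.
from pathlib import Path

import torch
from PIL import Image
from torch.utils.data import Dataset


class SkullSliceDataset(Dataset):
    """Loads grayscale skull slices from a flat directory of images."""

    EXTENSIONS = {".png", ".jpg", ".jpeg"}

    def __init__(self, root, image_size=(256, 256)):
        # Pick up every supported image file directly under `root`
        self.paths = sorted(
            p for p in Path(root).iterdir() if p.suffix.lower() in self.EXTENSIONS
        )
        self.image_size = image_size  # (width, height), as PIL expects

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # Load as 8-bit grayscale, resize, scale to [0, 1], shape (1, H, W)
        img = Image.open(self.paths[idx]).convert("L").resize(self.image_size)
        pixels = torch.tensor(list(img.getdata()), dtype=torch.float32) / 255.0
        w, h = self.image_size
        return pixels.view(1, h, w)
```

A `torch.utils.data.DataLoader` wrapped around such a dataset would then feed batches to the training script.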
Before training, run the smoke test to catch configuration issues early:
```bash
python run_smoke.py
```

What it checks:
- ✓ Config module imports correctly
- ✓ Dataset paths point to existing directories
- ✓ UNet architecture modules load without errors
- ✓ Python environment has required packages
If everything passes, you're ready to train! 🎉
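A hand-rolled version of the checks above (a stand-in sketch, not the actual `run_smoke.py`, which assumes `config.py` exposes `PATH_TRAIN`/`PATH_VAL`) could look like:

```python
# Minimal stand-in for the smoke checks -- run_smoke.py itself may differ.
import importlib
import os
import sys


def smoke_test():
    failures = []

    # 1. Config module imports correctly
    try:
        config = importlib.import_module("config")
    except ImportError as exc:
        failures.append(f"config import failed: {exc}")
        config = None

    # 2. Dataset paths point to existing directories
    if config is not None:
        for attr in ("PATH_TRAIN", "PATH_VAL"):
            path = getattr(config, attr, None)
            if path is None or not os.path.isdir(path):
                failures.append(f"{attr} does not point to an existing directory")

    # 3. Required packages (and the architecture's dependencies) are importable
    for pkg in ("torch", "numpy"):
        try:
            importlib.import_module(pkg)
        except ImportError:
            failures.append(f"missing required package: {pkg}")

    return failures


if __name__ == "__main__":
    problems = smoke_test()
    for p in problems:
        print("FAIL:", p)
    sys.exit(1 if problems else 0)
```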
Launch training from the repository root:
```bash
python main_run_scan_rebuild.py
```

The script will:
- Load and validate your dataset
- Initialize the UNet architecture
- Train with progress monitoring
- Save checkpoints automatically
- Validate on your validation set
Check the top of main_run_scan_rebuild.py for command-line options and advanced configuration.
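Schematically, the train/checkpoint/validate cycle above boils down to a loop like the following. All names here (`train`, `ckpt_path`, the MSE reconstruction loss) are illustrative assumptions, not the repository's actual API:

```python
# Schematic training loop with checkpointing and early stopping.
# Names and loss choice are illustrative, not the repository's actual API.
import torch
import torch.nn as nn


def train(model, train_loader, val_loader, epochs=100, patience=10,
          lr=1e-4, ckpt_path="checkpoint_best.pt"):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()  # reconstruction loss for inpainting / SR
    best_val, stale = float("inf"), 0

    for epoch in range(epochs):
        model.train()
        for inputs, targets in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs.to(device)), targets.to(device))
            loss.backward()
            optimizer.step()

        # Validate on the held-out set
        model.eval()
        with torch.no_grad():
            val_loss = sum(
                criterion(model(x.to(device)), y.to(device)).item()
                for x, y in val_loader
            ) / max(len(val_loader), 1)

        if val_loss < best_val:
            best_val, stale = val_loss, 0
            torch.save(model.state_dict(), ckpt_path)  # checkpoint best model
        else:
            stale += 1
            if stale >= patience:
                break  # early stopping
    return best_val
```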
skull-reconstruction-unet/
├─ 📄 config.py # Central configuration hub
├─ 🎯 main_run_scan_rebuild.py # Training orchestration
├─ ✅ run_smoke.py # Environment validation
├─ 📦 Unet_Architecture/ # Model implementations
│ ├─ Image_Painting/ # Inpainting models
│ └─ Super_Resolution/ # SR models
├─ 📂 dataset/ # Your data goes here
│ ├─ train/
│ └─ val/
└─ 📋 requirements.txt # Python dependencies
Best Practices:
- 🔒 Keep datasets external — use `.gitignore` to exclude `dataset/` contents
- 📊 Track experiments with clear naming conventions
- 💾 Regularly back up trained model checkpoints
- 🧬 Document any architectural modifications
Next Steps for Publication:
- Add a `LICENSE` file (consider MIT, Apache 2.0, or GPL)
- Include `CITATION.cff` for academic attribution
- Create `CONTRIBUTING.md` for community guidelines
- Expand documentation with a dataset preparation guide
This is research code under active development. Contributions, bug reports, and feature requests are welcome through issues and pull requests.
Built with PyTorch | Powered by UNet | Advancing Medical AI