NeuroScan AI is a deep learning-powered brain tumor segmentation tool that leverages a U-Net architecture to automatically detect and segment brain tumors in MRI scans. This application provides a user-friendly interface for medical professionals and researchers to analyze brain MRI images with high accuracy.
- U-Net Architecture: State-of-the-art segmentation model for medical imaging
- Test-Time Augmentation (TTA): 4-fold ensemble for more robust predictions
- CLAHE Enhancement: Contrast Limited Adaptive Histogram Equalization for improved image quality
- Adaptive Thresholding: Intelligent threshold selection for accurate segmentation
- Post-processing: Morphological operations to refine segmentation masks
- Multi-format Support: Accepts JPG, PNG, TIF, and BMP image formats
- Clinical Visualization: Overlay, heatmap, and binary mask outputs
- Performance Metrics: Tumor area percentage, confidence scores, and severity classification
The application uses a custom U-Net implementation with the following components:
- Data Preprocessing:
  - CLAHE enhancement in LAB color space
  - Multi-channel image handling
  - Normalization to model requirements
- Model Architecture:
  - Encoder-decoder structure with skip connections
  - Double convolution blocks with batch normalization
  - 4-level feature extraction (64, 128, 256, 512 features)
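The double-convolution block and 4-level encoder described above can be sketched as follows. This is an illustrative reconstruction, not the project's actual code: the class name `DoubleConv`, the 3-channel input, and the use of max pooling between levels are assumptions.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 convolutions, each followed by batch normalization and ReLU."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Encoder side of the 4-level U-Net: 64 -> 128 -> 256 -> 512 feature channels.
encoder = nn.ModuleList(
    [DoubleConv(i, o) for i, o in [(3, 64), (64, 128), (128, 256), (256, 512)]]
)
pool = nn.MaxPool2d(2)

x = torch.randn(1, 3, 256, 256)  # one 256x256 RGB scan, as in Model Details
skips = []
for stage in encoder:
    x = stage(x)
    skips.append(x)  # saved for the decoder's skip connections
    x = pool(x)      # halve spatial resolution between levels
# x is now the deepest feature map (1, 512, 16, 16)
```

The decoder would mirror this path, upsampling at each level and concatenating the matching entry of `skips` before another `DoubleConv`.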
- Inference Pipeline:
  - Test-time augmentation (4-fold)
  - Adaptive threshold selection using Otsu's method
  - Post-processing with morphological operations
- Visualization:
  - Overlay visualization with customizable opacity
  - Heatmap generation for confidence visualization
  - Binary mask output for further analysis
- Python 3.8 or higher
- PyTorch 2.11.0
- CUDA-compatible GPU (recommended for faster inference)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd neuroscan-ai
  ```

- Create a virtual environment:

  ```bash
  python -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Download the pre-trained model:
  - Place `unet_brain_tumor.pth` in the project root directory
- Run the Streamlit application:

  ```bash
  streamlit run app.py
  ```

- Upload a brain MRI scan (JPG, PNG, TIF, or BMP format)
- Configure analysis settings in the sidebar:
  - Enable/disable Test-Time Augmentation
  - Adjust overlay opacity
- Click "Run Segmentation" to process the image
- View results in the tabs:
  - Overlay: Original image with tumor overlay
  - Side-by-Side: Original and segmented images
  - Heatmap: Confidence probability map
  - Mask: Binary segmentation mask
- Download results using the export buttons
- Architecture: U-Net with skip connections
- Input Size: 256×256 pixels
- Training Data: TCGA-LGG and similar brain MRI datasets
- Output: Binary segmentation mask of tumor regions
- Device: Automatically uses GPU if available, falls back to CPU
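The device selection and checkpoint loading described above can be sketched as below. This is a hedged illustration: `nn.Conv2d` stands in for the project's actual U-Net class (defined in the application code), and the checkpoint is loaded only if present.

```python
import os
import torch
import torch.nn as nn

# Device selection mirrors the app: GPU if available, otherwise CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for the project's U-Net; substitute the real model class here.
model = nn.Conv2d(3, 1, kernel_size=1).to(device)

ckpt = "unet_brain_tumor.pth"  # checkpoint from the installation step
if os.path.exists(ckpt):
    model.load_state_dict(torch.load(ckpt, map_location=device))
model.eval()

with torch.no_grad():
    x = torch.rand(1, 3, 256, 256, device=device)  # one 256x256 input scan
    prob = torch.sigmoid(model(x))                 # per-pixel tumor probability
```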
- Preprocessing:
  - CLAHE enhancement in LAB color space for improved contrast
  - Multi-channel handling for various input formats
- Inference:
  - 4-fold Test-Time Augmentation (horizontal flip + brightness variations)
  - Adaptive thresholding using Otsu's method with fallback
- Post-processing:
  - Morphological closing and opening operations
  - Minimum area filtering to remove noise
- Visualization:
  - Color-coded overlays (blue for tumor regions)
  - Heatmap visualization with Inferno colormap
  - Export functionality for all results
- Efficient U-Net implementation with skip connections
- GPU acceleration when available
- Optimized image processing pipeline
- Memory-efficient inference with batch processing
We welcome contributions to improve NeuroScan AI! Please follow these steps:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
Please ensure your code follows the existing style and includes appropriate tests.
This project is licensed under the MIT License - see the LICENSE file for details.
For questions or support, please open an issue on the GitHub repository.