RadNet

Package for biomedical image segmentation.


Getting Started • Train • Test • Interpret • Performance • Release Notes • Upcoming Releases • Citation • FAQ • Blog

Made by Mukesh Mithrakumar • 🌌 https://mukeshmithrakumar.com

What is it

**RadNet** is an ensemble convolutional neural network package (using U-Net, VGG, and ResNet) for biomedical image detection, segmentation, and classification.

Currently the code works for the ISBI Neuronal Stack Segmentation dataset. See Release Notes for the current release features and see Upcoming Releases for the next release enhancements.

If this repository helps you in anyway, show your love ❤️ by putting a ⭐️ on this project ✌️

Please note: since this is a developer release, the code is under constant development, and I won't commit updates to the repo until everything is fully tested. If you run into any issues, please reach out. The best way to avoid problems is to develop against the released source.

📋 Getting Started

📀 Software Prerequisites:

```
pip install matplotlib graphviz tensorflow scikit-learn tifffile Pillow scipy numpy torch torchvision pytest flake8 cython psutil 'opencv-python>=3.3.0'
```

💻 Hardware Prerequisites:

Tested on an NVIDIA GeForce GTX 1050 Ti with a 4 GB GDDR5 frame buffer and 768 NVIDIA CUDA® cores.

📘 Folder Structure

```
main_dir
- data (The folder containing data files for training and testing)
- pytorch_unet (Package directory)
    - model (PyTorch u-net model)
        - u_net.py
    - optimize
        - c_extensions.pyx
        - config.py
        - hyperparameter.py
        - multi_process.py
        - performance.py
    - processing
        - augments.py
        - load.py
    - trainer
        - evaluate.py
        - interpret.py
        - train.py
    - utils
        - helpers.py
        - metrics.py
        - unit_test.py
    - visualize
        - logger.py
        - plot.py
- train_logs (will be created)
- visualize (will be created)
- weights (will be created)
```

🔧 Install

Currently you can clone the repo and start building. A PyPI release is in the works; this section will be updated once it is available.

⌛️ Train

▴ Back to top

Train the model by running:

train.py root_dir(path/to/root directory)

Arguments that can be specified in the training mode:

usage: train.py [-h] [--main_dir MAIN_DIR] [--resume] [-v]
                [--weights_dir WEIGHTS_DIR] [--log_dir LOG_DIR]
                [--image_size IMAGE_SIZE] [--batch_size BATCH_SIZE]
                [-e EPOCHS] [-d DEPTH] [--n_classes N_CLASSES]
                [--up_mode {upconv, upsample}] [--augment]
                [--augment_type {geometric, image, both}]
                [--transform_prob TRANSFORM_PROB] [--test_size TEST_SIZE]
                [--log] [-bg]

Script for training the model

optional arguments:
  -h, --help            show this help message and exit
  --main_dir MAIN_DIR   main directory
  --resume              Choose to start training from checkpoint
  -v, --verbose         Choose to set verbose to False
  --weights_dir WEIGHTS_DIR
                        Choose directory to save weights model
  --log_dir LOG_DIR     Choose directory to save the logs
  --image_size IMAGE_SIZE
                        resize image size
  --batch_size BATCH_SIZE
                        batch size
  -e EPOCHS, --epochs EPOCHS
                        Number of training epochs
  -d DEPTH, --depth DEPTH
                        Number of downsampling/upsampling blocks
  --n_classes N_CLASSES
                        Number of classes in the dataset
  --up_mode {upconv, upsample}
                        Type of upsampling
  --augment             Whether to augment the train images or not
  --augment_type {geometric, image, both}
                        Which type of augmentation to choose from: geometric,
                        brightness or both
  --transform_prob TRANSFORM_PROB
                        Probability of images to augment when calling
                        augmentations
  --test_size TEST_SIZE
                        Validation size to split the data, should be in
                        between 0.0 to 1.0
  --log                 Log the Values
  -bg, --build_graph    Build the model graph
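The training interface above maps naturally onto Python's `argparse`. A minimal sketch of how a few of these flags could be wired up (flag names are taken from the usage text above; the defaults here are assumptions, not the package's actual defaults):

```python
import argparse

def build_parser():
    """Sketch of a parser covering a subset of the train.py flags."""
    parser = argparse.ArgumentParser(description="Script for training the model")
    parser.add_argument("--main_dir", default=".", help="main directory")
    parser.add_argument("--batch_size", type=int, default=4, help="batch size")
    parser.add_argument("-e", "--epochs", type=int, default=10,
                        help="number of training epochs")
    parser.add_argument("-d", "--depth", type=int, default=5,
                        help="number of downsampling/upsampling blocks")
    parser.add_argument("--up_mode", choices=["upconv", "upsample"], default="upconv",
                        help="type of upsampling")
    parser.add_argument("--augment", action="store_true",
                        help="whether to augment the train images")
    parser.add_argument("--test_size", type=float, default=0.2,
                        help="validation split fraction, between 0.0 and 1.0")
    return parser

# Example invocation mirroring `train.py --batch_size 8 --up_mode upsample --augment`
args = build_parser().parse_args(["--batch_size", "8", "--up_mode", "upsample", "--augment"])
print(args.batch_size, args.up_mode, args.augment)
```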

📋 Logging

To enable logging of the errors (disabled by default), run:

train.py root_dir(path/to/root directory) --log

To view the log in TensorBoard, follow the log statement printed after training:

📊 Network Graph

Since PyTorch graphs are dynamic, I couldn't yet integrate them with TensorBoard, but as a quick hack you can run the following to build a PNG version of the model architecture (disabled by default):

train.py root_dir(path/to/root directory) -bg

⌚️ Test

▴ Back to top

Evaluate the model on the test data by running:

evaluate.py root_dir(path/to/root directory)

Arguments that can be specified in the evaluation mode:

usage: evaluate.py [-h] [--main_dir MAIN_DIR] [--image_size IMAGE_SIZE]
                   [--weights_dir WEIGHTS_DIR]

Script for evaluating the trained model

optional arguments:
  -h, --help            show this help message and exit
  --main_dir MAIN_DIR   main directory
  --image_size IMAGE_SIZE
                        resize image size to match train image size
  --weights_dir WEIGHTS_DIR
                        Choose directory to save weights model
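Evaluating a segmentation model usually means reporting an overlap metric between the predicted and ground-truth masks. Below is a minimal sketch of the Dice coefficient on binary masks, a common choice for this task (this is illustrative, not necessarily what `utils/metrics.py` implements):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Identical masks score ~1.0; partial overlap scores in between.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(round(dice_coefficient(a, b), 3))  # 2*1 / (2+1)
```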

📉 Interpret

▴ Back to top

Visualize the intermediate layers by running:

interpret.py root_dir(path/to/root directory)

Arguments that can be specified in the interpret mode:

usage: interpret.py [-h] [--main_dir MAIN_DIR]
                    [--interpret_path INTERPRET_PATH]
                    [--weights_dir WEIGHTS_DIR] [--image_size IMAGE_SIZE]
                    [--depth DEPTH]
                    [--plot_interpret {sensitivity,block_filters}]
                    [--plot_size PLOT_SIZE]

Script for interpreting the trained model results

optional arguments:
  -h, --help            show this help message and exit
  --main_dir MAIN_DIR   main directory
  --interpret_path INTERPRET_PATH
                        Choose directory to save layer visualizations
  --weights_dir WEIGHTS_DIR
                        Choose directory to load weights from
  --image_size IMAGE_SIZE
                        resize image size
  --depth DEPTH         Number of downsampling/upsampling blocks
  --plot_interpret {sensitivity,block_filters}
                        Type of interpret to plot
  --plot_size PLOT_SIZE
                        Image size of sensitivity analysis

🔩 Sensitivity Analysis

To do sensitivity analysis run:

interpret.py root_dir(path/to/root directory) --plot_interpret sensitivity
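One common way to compute a sensitivity map (not necessarily the exact method `interpret.py` uses) is occlusion: slide a patch over the input, zero it out, and record how much the model's score drops. A minimal NumPy sketch with a stand-in scoring function in place of a trained network:

```python
import numpy as np

def occlusion_sensitivity(image, score_fn, patch=2):
    """Score drop when each patch-sized region of the image is zeroed out."""
    base = score_fn(image)
    h, w = image.shape
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heatmap[i // patch, j // patch] = base - score_fn(occluded)
    return heatmap

# Stand-in "model": total intensity, so the bright corner is most sensitive.
img = np.zeros((4, 4))
img[0:2, 0:2] = 1.0
hm = occlusion_sensitivity(img, score_fn=lambda x: x.sum(), patch=2)
print(hm)
```

In practice `score_fn` would run the trained U-Net and return a scalar (e.g. the mean foreground probability), and the heatmap would be upsampled and overlaid on the input.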

🔩 Block Analysis

To visualize the weight output of each up/down sampling block run:

interpret.py root_dir(path/to/root directory) --plot_interpret block_filters
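Visualising a block's learned filters typically amounts to tiling the weight tensor into a single image grid (PyTorch conv weights have shape `(out_channels, in_channels, kH, kW)`). A hedged sketch of the tiling step, operating on one input channel's slices:

```python
import numpy as np

def filter_grid(weights, cols=4, pad=1):
    """Tile (N, kH, kW) filter slices into one 2-D array for plotting."""
    n, kh, kw = weights.shape
    rows = int(np.ceil(n / cols))
    grid = np.zeros((rows * (kh + pad) - pad, cols * (kw + pad) - pad))
    for idx in range(n):
        r, c = divmod(idx, cols)
        grid[r * (kh + pad):r * (kh + pad) + kh,
             c * (kw + pad):c * (kw + pad) + kw] = weights[idx]
    return grid

# Eight 3x3 filters tile into a 2x4 grid with 1-pixel padding between cells.
w = np.random.rand(8, 3, 3)
print(filter_grid(w).shape)  # (7, 15)
```

The resulting array can then be shown with `matplotlib.pyplot.imshow`.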

📈 Performance

▴ Back to top

(Work in Progress)

:octocat: Release Notes

▴ Back to top

💎 0.1.0 Developer Pre-Release (Jan 01 2019)

:octocat: Upcoming Releases

▴ Back to top

Keep an eye out 👀 for upcoming releases:

🔥 0.2.0 Developer Pre-Alpha

🔥 0.3.0 Developer Alpha

  • Biomedical image pre-processing script
  • modifications for the unet to work on MRI data
  • test on the CHAOS Segmentation challenge
  • modifications for the unet to work on CT scan
  • test on the PAVES Segmentation challenge
  • complete unit_test.py for the above
  • Deploy alpha PyPI package

🔥 0.4.0 Developer Alpha

  • Neural architecture search script
  • Classifier to identify between the organs (One U-Net to segment different organs)
  • Separate classifier to identify different cells
  • Deploy alpha PyPI package

🔥 0.5.0 Science/Research Beta

  • Graphical user interface for RadNet
  • Developer and researcher mode for the GUI
  • Abstract away the deep learning internals so the tool is doctor-friendly rather than aimed at Python/deep learning users
  • Build into a software package
  • Deploy beta PyPI package

©️ Citation

▴ Back to top

💬 FAQ

▴ Back to top

  • For any questions and collaborations you can reach me via LinkedIn