A TensorFlow and Keras backed framework for learned segmentation of 3D CT scan volumes. Supported functionality includes training models, running inference, and quantifying uncertainty. The main underlying model architecture is V-Net.
`mcdn-3d-seg` expects Python >=3.5. Also, be sure to activate your desired virtual environment before installing.
Run one of the following to install, depending on whether you want GPU support:
# for GPU support
pip install -e .[gpu]
# for CPU only
pip install -e .[cpu]
The JSON file `sacred_config.json` specifies Sacred experiment configuration independent of the run configuration. Specifically, `file_observer_base_dir` specifies where Sacred stores its run logs. The default is `mcdn-3d-seg/runs/`.
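For example, redirecting Sacred's run logs to a custom directory might look like the following (a sketch; `file_observer_base_dir` is the only field named in this README, so any other fields the file supports are not shown):

```json
{
  "file_observer_base_dir": "my_experiment_runs/"
}
```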
`mcdn-3d-seg` contains multiple scripts, such as `train.py` and `infer.py`, each of which expects a JSON config. The syntax to run one of these scripts is:
python <PATH/TO/SCRIPT>.py with <PATH/TO/CONFIG>.json
The default config is a Python dict, `DEFAULT_CONFIG`, in the file `ctseg/config.py`. The JSON config needs to match the structure of `DEFAULT_CONFIG` but only needs to contain override values.
- create an empty JSON file
- add individual fields from `DEFAULT_CONFIG`, or copy/paste the entire config into your JSON file
- modify values as appropriate
Important: be sure to update the inputs and outputs, as the default values are just placeholders.
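A minimal override config might look like the following (a sketch only; the nested keys under `train_config` and the `"standard"` normalization value are assumptions for illustration, so check `DEFAULT_CONFIG` in `ctseg/config.py` for the actual field names and allowed values):

```json
{
  "normalization": "standard",
  "train_config": {
    "inputs": "/path/to/train/inputs",
    "outputs": "/path/to/train/outputs"
  }
}
```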
Important config parameters:
- `normalization`: the default is no normalization, but you will likely want to change this
- `train_config`, `test_config`: the inputs and outputs here need to be specified for every experiment

Note: `inference_config` and `plot_config` are not used during training.
Once a config has been created, use the config to train via:
python train.py with <PATH/TO/CONFIG>.json
- Select whether you want to resume from the `best` model or the `latest` model. If you saved the entire model (the default), then use `resume_from`; otherwise use `load_weights_from`.
- Resume training the model:
python train.py with <PATH/TO/CONFIG>.json resume_from=<"best" OR "latest">
Before running inference, be sure you have specified the correct input/output paths in `inference_config` of the JSON config. Once set, run inference via:
python infer.py with <PATH/TO/CONFIG>.json
To run inference using a different model, add `resume_from=<PATH/TO/MODEL>` at the end of the above command.
Tyler Ganter; Carianne Martinez; Kevin Potter; Jessica Jones; Tyler LaBonte