Tools for whole slide image processing and classification

pyvirchow

Travis CI build badge: https://travis-ci.com/saketkc/pyvirchow.svg?token=GsuWFnsdqcXUSp8vzLip&branch=master

Logo: ./logo/virchow_480x480.jpg

Features

See the Demo or browse all notebooks.

Training InceptionV4 on Tumor/Normal patches

We currently rely on the InceptionV4 model for training; it is one of the deepest and most sophisticated architectures available. Another model we would eventually like to explore is Inception-ResNet.

Step 1. Create tissue masks

pyvirchow create-tissue-masks --indir /CAMELYON16/testing/images/ \
--level 5 --savedir /CAMELYON16/testing/tissue_masks
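A tissue mask separates stained tissue from the bright glass background of a slide thumbnail. pyvirchow's exact implementation may differ; the sketch below (all function names are hypothetical) illustrates the common approach of Otsu-thresholding the saturation channel of a downsampled RGB image, since glass is bright but unsaturated while tissue is strongly saturated:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w1 = np.cumsum(hist)            # pixel count at or below each candidate split
    w2 = w1[-1] - w1                # pixel count above the split
    s1 = np.cumsum(hist * centers)  # weighted intensity sum below
    s2 = s1[-1] - s1                # weighted intensity sum above
    m1 = s1 / np.maximum(w1, 1e-12)
    m2 = s2 / np.maximum(w2, 1e-12)
    var_between = w1 * w2 * (m1 - m2) ** 2
    return centers[np.argmax(var_between)]

def tissue_mask(rgb):
    """rgb: (H, W, 3) uint8 slide thumbnail; returns a boolean tissue mask."""
    rgb = rgb.astype(float) / 255.0
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    # HSV-style saturation: 0 for grey/white glass, high for stained tissue
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-12), 0.0)
    return saturation > otsu_threshold(saturation.ravel())
```

In practice the thumbnail would be read from the WSI at the requested `--level` (e.g. via OpenSlide), and the resulting mask saved to `--savedir`.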

Step 2. Create annotation masks

pyvirchow create-annotation-masks --indir /CAMELYON16/testing/images/ \
--level 5 --savedir /CAMELYON16/testing/annotation_masks \
--jsondir /CAMELYON16/testing/lesion_annotations_json
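The lesion annotations are polygon outlines, which must be rasterized into binary masks at the chosen level. As a minimal sketch of that step (not pyvirchow's actual code; the function name is hypothetical), here is an even-odd ray-casting rasterizer in pure numpy:

```python
import numpy as np

def polygon_mask(vertices, shape):
    """Rasterize a closed polygon into a boolean mask via even-odd ray casting.
    vertices: sequence of (x, y) in mask coordinates; shape: (height, width)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs + 0.5  # sample at pixel centers
    ys = ys + 0.5
    inside = np.zeros(shape, dtype=bool)
    v = np.asarray(vertices, dtype=float)
    n = len(v)
    for i in range(n):
        x1, y1 = v[i]
        x2, y2 = v[(i + 1) % n]
        # does the horizontal ray from each pixel center cross this edge?
        crosses = ((y1 > ys) != (y2 > ys)) & (
            xs < (x2 - x1) * (ys - y1) / (y2 - y1 + 1e-12) + x1
        )
        inside ^= crosses  # odd number of crossings means the pixel is inside
    return inside
```

Each lesion polygon from the `--jsondir` files would be rasterized this way (with coordinates scaled down to `--level`) and the per-lesion masks combined with a logical OR.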

Step 3A. Extract tumor patches

pyvirchow extract-tumor-patches --indir /CAMELYON16/testing/images/ \
--annmaskdir /CAMELYON16/testing/annotation_masks \
--tismaskdir /CAMELYON16/testing/tissue_masks \
--level 5 --savedir /CAMELYON16/testing/extracted_tumor_patches
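Conceptually, tumor patch extraction slides a window over the slide and keeps locations where the annotation mask shows sufficient tumor coverage within tissue. A simplified sketch of that selection logic (the function name, tiling stride, and threshold are illustrative assumptions, not pyvirchow's exact parameters):

```python
import numpy as np

def patch_coordinates(ann_mask, tissue_mask, patch_size=256, min_tumor_frac=0.5):
    """Return top-left (row, col) coordinates of patches whose tumor-annotation
    coverage is at least min_tumor_frac, restricted to tissue regions.
    Both masks are boolean arrays at the same downsampled level."""
    h, w = ann_mask.shape
    coords = []
    for r in range(0, h - patch_size + 1, patch_size):
        for c in range(0, w - patch_size + 1, patch_size):
            window = ann_mask[r:r + patch_size, c:c + patch_size]
            on_tissue = tissue_mask[r:r + patch_size, c:c + patch_size].any()
            if on_tissue and window.mean() >= min_tumor_frac:
                coords.append((r, c))
    return coords
```

The selected coordinates (scaled back to level 0) would then be used to read the actual RGB patches from the WSI and write them to `--savedir`.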

Step 3B. Extract normal patches

pyvirchow extract-normal-patches --indir /CAMELYON16/training/normal \
--tismaskdir /CAMELYON16/training/tissue_masks --level 5 \
--savedir /CAMELYON16/training/extracted_normal_patches
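Normal patch extraction is the complementary filter: keep windows that are mostly tissue and contain no annotated tumor (for normal slides the annotation mask is simply empty). A sketch under the same illustrative assumptions as above:

```python
import numpy as np

def normal_patch_coordinates(tissue_mask, ann_mask,
                             patch_size=256, min_tissue_frac=0.8):
    """Return top-left (row, col) coordinates of candidate normal patches:
    mostly tissue, with zero overlap with annotated tumor regions."""
    h, w = tissue_mask.shape
    coords = []
    for r in range(0, h - patch_size + 1, patch_size):
        for c in range(0, w - patch_size + 1, patch_size):
            tissue_frac = tissue_mask[r:r + patch_size, c:c + patch_size].mean()
            has_tumor = ann_mask[r:r + patch_size, c:c + patch_size].any()
            if tissue_frac >= min_tissue_frac and not has_tumor:
                coords.append((r, c))
    return coords
```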

Dataset download

FTP
