Python hack to measure different areas in microscopy samples.

Microscopy image segmentation for area measurement

A tool to measure areas in microscopy samples, e.g. to separate background from cells. It can be trained to recognize multiple different area types and uses a method akin to Gaussian mixture models. This is a quick hack for now.
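
The underlying idea can be sketched with scikit-learn (whether the script itself uses scikit-learn is not stated here; the function names, the number of mixture components, and the use of raw RGB values as per-pixel features are all assumptions of this illustration): fit one Gaussian mixture per area type on pixels taken from hand-masked examples, then label every pixel of a new image with the model that assigns it the highest likelihood.

```python
# Minimal sketch of GMM-based pixel classification, assuming RGB colors as
# per-pixel features; the real script may use different features and models.
import numpy as np
from skimage.io import imread
from sklearn.mixture import GaussianMixture

def fit_model(example_pixels, components=3):
    """Fit one Gaussian mixture to the (N, 3) color samples of one area type."""
    return GaussianMixture(n_components=components).fit(example_pixels)

def classify(image_rgb, models):
    """Label each pixel with the index of the model giving the highest likelihood."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    scores = np.stack([m.score_samples(pixels) for m in models], axis=1)
    return scores.argmax(axis=1).reshape(image_rgb.shape[:2])

# Hypothetical usage with two hand-masked example pixel sets:
# background = fit_model(background_pixels)
# cells = fit_model(cell_pixels)
# labels = classify(imread("image_preprocessed.png")[..., :3], [background, cells])
```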

Required tools

Python, NumPy, SciPy, scikit-image, and others.

Windows and macOS users can install Anaconda to get all required dependencies: http://www.anaconda.com/
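
A quick, optional way to check that the scientific Python stack is available (a minimal sketch; it only verifies that the packages named above are importable):

```python
# Sanity check that the main dependencies are importable; versions are printed
# only for reference, no specific versions are required by this note.
import numpy, scipy, skimage
print("numpy", numpy.__version__)
print("scipy", scipy.__version__)
print("scikit-image", skimage.__version__)
```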

Authors

David R. Piegdon <dgit@piegdon.de>

License

All files in this repository are released under the GNU General Public License, version 3 or later.

Howto

Segmenting a bunch of images is a 3-step process, with an optional fourth step:

  1. preprocess: Images need to be preprocessed. (NOTE: TIFF images are expected as input.)

  2. train: The user needs to define models; the script then derives a statistical model from these.

  3. apply: The models are applied to all images. Review your results!

  4. Optionally, reapply: You can edit the ..._segmentation.png files and call reapply to update the segmentation results produced in the third step.

After that you can take the machine-readable results from the analysis-summary.csv file.
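
For automated post-processing, the CSV can also be read with Python's standard csv module. A minimal sketch, assuming the data/ directory layout used in the example below; the exact column names depend on the file's header row:

```python
# Minimal sketch for reading analysis-summary.csv; the column names are not
# documented here -- inspect the file's header row before relying on them.
import csv

with open("data/analysis-summary.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        print(dict(row))   # one dictionary per entry in the summary
```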

Example

  • Check out the git repository somewhere. Alternatively, put segmentation.py, __init__.py and the assets subdirectory with all its files into a directory, e.g. repo/.

  • Copy all TIFF images to be processed into a directory inside this repository (or directory), e.g. repo/data/.

  • Open a shell (terminal/command prompt) and go to the repo/ directory.

  • Run `python ./segmentation.py preprocess data`.

  • For all models that you want to separate, create a subdirectory inside repo/data/ named model_xyz, where the prefix MUST be model_ and the suffix is chosen by you, e.g. two models model_background and model_cells.

  • For each model that you want to create, pick some of the PREPROCESSED images (in repo/data/*_preprocessed.png) and mask everything but the model-relevant areas. Save them as PNG into the corresponding model directory, e.g. remove everything but the background and save those images into repo/data/model_background; do the same for the foreground or any other models that you want to define. You CAN use the alpha channel: everything that is NOT FULLY OPAQUE will be ignored when generating the model (see the sketch after this list).

  • Back in the shell, in the repo/ directory, run `python ./segmentation.py train data`. This will learn the models that you defined.

  • When the learning is finished, run `python ./segmentation.py apply data` in the same shell.

  • Quickly review all ..._analysis.png files. If you find a bad segmentation, you have two choices:

    a) Improve the models that you defined and go back to the train step. That will result in a different interpretation for all of your images.

    b) Manually edit the segmentation of a specific image. To do so, edit the corresponding ..._segmentation.png file. You MUST stick to the colors used for the different layers; no transparency and no gradients are allowed. When you are done, call `python ./segmentation.py reapply data`. The important bit here is to REapply, otherwise your edit will be overwritten. This obviously only changes the segmentation of the image you edited.

  • For further analysis, use the analysis-summary.csv file. It contains the number of pixels and the area in percent for each model in each file, in a machine-readable way; e.g., you can open it with Excel.

  • Get a beer and enjoy.
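
Because everything that is not fully opaque is ignored when generating a model, it can help to verify a hand-masked sample before training. A minimal sketch (the file path is only an example, and 8-bit alpha is assumed):

```python
# Count fully opaque pixels in a hand-masked model sample; according to the
# README only those pixels contribute to the model. The path is an example.
from skimage.io import imread

mask = imread("data/model_background/sample1.png")
if mask.ndim == 3 and mask.shape[2] == 4:          # RGBA image with 8-bit alpha
    opaque = (mask[..., 3] == 255)
    print(f"{opaque.sum()} of {opaque.size} pixels are fully opaque")
else:
    print("No alpha channel: every pixel would be used for the model")
```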

In the end, the directory should look like:

repo/
repo/segmentation.py				<< Script to run.
repo/__init__.py				<< Part of the script.
repo/assets/*					<< Part of the script.

repo/data/					<< Directory where you place your data
repo/data/originalimage1.tif			<< original image files
repo/data/originalimage2.tif
repo/data/originalimage3.tif
repo/data/originalimage1_preprocessed.png	<< Preprocessed images, generated by the preprocess command.
repo/data/originalimage2_preprocessed.png
repo/data/originalimage3_preprocessed.png
repo/data/model_front/sample1.png		<< Samples that you cut out from preprocessed
repo/data/model_front/sample2.png		   images to define models.
repo/data/model_front/sample3.png
repo/data/model_back/sample1.png		<< Samples for other model group.
repo/data/model_back/sample2.png
repo/data/model_back/sample3.png
repo/data/model_front.gmmpickle			<< Raw data of a model, as generated by the `train` command.
repo/data/model_back.gmmpickle			   You should ignore this.
repo/data/originalimage1_analysis.png		<< Analysis files for review,
repo/data/originalimage2_analysis.png		   as generated by the apply command.
repo/data/originalimage3_analysis.png
repo/data/originalimage1_segmentation.png	<< Segmentation, as generated by the apply command.
repo/data/originalimage2_segmentation.png	   Can optionally be edited and reapplied.
repo/data/originalimage3_segmentation.png
repo/data/analysis-summary.csv			<< CSV containing final analysis per model per image.