From dd902637822ab8c3586676d1954d7e01ad9dba10 Mon Sep 17 00:00:00 2001 From: Jirka Date: Mon, 2 Aug 2021 11:08:47 +0200 Subject: [PATCH] CI: yesqa & mdformat * add & apply yesqa * add & apply mdformat --- .github/CODE_OF_CONDUCT.md | 22 +- .github/CONTRIBUTING.md | 76 +++-- .github/ISSUE_TEMPLATE/Bug_report.md | 11 +- .github/ISSUE_TEMPLATE/Feature_request.md | 9 +- .github/PULL_REQUEST_TEMPLATE.md | 12 +- .pre-commit-config.yaml | 14 + README.md | 349 +++++++++++----------- docs/source/conf.py | 2 +- 8 files changed, 266 insertions(+), 229 deletions(-) diff --git a/.github/CODE_OF_CONDUCT.md b/.github/CODE_OF_CONDUCT.md index c25ed728..1e1a025c 100644 --- a/.github/CODE_OF_CONDUCT.md +++ b/.github/CODE_OF_CONDUCT.md @@ -10,19 +10,19 @@ In the interest of fostering an open and welcoming environment, we as contributo Examples of behavior that contributes to creating a positive environment include: -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy towards other community members +- Using welcoming and inclusive language +- Being respectful of differing viewpoints and experiences +- Gracefully accepting constructive criticism +- Focusing on what is best for the community +- Showing empathy towards other community members Examples of unacceptable behavior by participants include: -* The use of sexualized language or imagery and unwelcome sexual attention or advances -* Trolling, insulting/derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or electronic address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a professional setting +- The use of sexualized language or imagery and unwelcome sexual attention or advances +- Trolling, insulting/derogatory comments, and personal or political attacks +- Public or private harassment +- Publishing others' private information, such as a physical or electronic address, without explicit permission +- Other conduct which could reasonably be considered inappropriate in a professional setting ## Our Responsibilities @@ -42,5 +42,5 @@ Project maintainers who do not follow or enforce the Code of Conduct in good fai ## Attribution -This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, +This Code of Conduct is adapted from the \[Contributor Covenant\]\[homepage\], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 1987a409..c4a60252 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -7,45 +7,50 @@ Developing Open Source is great fun! =) Here's the long and short of it: 1. Develop your contribution: - * Pull the latest changes from upstream:: + + - Pull the latest changes from upstream:: + ``` git checkout master git pull upstream master ``` - * Create a branch for the feature you want to work on. Since the branch name will appear in the merge message, use a sensible name such as 'transform-speedups':: + + - Create a branch for the feature you want to work on. 
Since the branch name will appear in the merge message, use a sensible name such as 'transform-speedups'::

   ```
   git checkout -b transform-speedups
   ```

-   * Commit locally as you progress (``git add`` and ``git commit``)
+   - Commit locally as you progress (`git add` and `git commit`)
+
 1. To submit your contribution:

-   * Push your changes back to your fork on GitHub::
+   - Push your changes back to your fork on GitHub::

   ```
   git push origin transform-speedups
   ```

-   * Enter your GitHub username and password (repeat contributors or advanced users can remove this step by connecting to GitHub with SSH. See detailed instructions below if desired).
-   * Go to GitHub. The new branch will show up with a green Pull Request button - click it.
+   - Enter your GitHub username and password (repeat contributors or advanced users can remove this step by connecting to GitHub with SSH. See detailed instructions below if desired).
+   - Go to GitHub. The new branch will show up with a green Pull Request button - click it.
+
 1. Review process:

-   * Reviewers (the other developers and interested community members) will write inline and/or general comments on your Pull Request (PR) to help you improve its implementation, documentation and style. Every single developer working on the project has their code reviewed, and we've come to see it as friendly conversation from which we all learn and the overall code quality benefits. Therefore, please don't let the review discourage you from contributing: its only aim is to improve the quality of project, not to criticize (we are, after all, very grateful for the time you're donating!).
-   * To update your pull request, make your changes on your local repository and commit. As soon as those changes are pushed up (to the same branch as before) the pull request will update automatically.
-   * `Travis-CI `__, a continuous integration service, is triggered after each Pull Request update to build the code, run unit tests, measure code coverage and check coding style (PEP8) of your branch. The Travis tests must pass before your PR can be merged. If Travis fails, you can find out why by clicking on the "failed" icon (red cross) and inspecting the build and test log.
-   * A pull request must be approved by two core team members before merging.
-## Guidelines

+   - Reviewers (the other developers and interested community members) will write inline and/or general comments on your Pull Request (PR) to help you improve its implementation, documentation and style. Every single developer working on the project has their code reviewed, and we've come to see it as a friendly conversation from which we all learn and the overall code quality benefits. Therefore, please don't let the review discourage you from contributing: its only aim is to improve the quality of the project, not to criticize (we are, after all, very grateful for the time you're donating!).
+   - To update your pull request, make your changes on your local repository and commit. As soon as those changes are pushed up (to the same branch as before) the pull request will update automatically.
+   - `Travis-CI `\_\_, a continuous integration service, is triggered after each Pull Request update to build the code, run unit tests, measure code coverage and check coding style (PEP8) of your branch. The Travis tests must pass before your PR can be merged. If Travis fails, you can find out why by clicking on the "failed" icon (red cross) and inspecting the build and test log.
+   - A pull request must be approved by two core team members before merging.
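For the SSH option mentioned above, a minimal sketch (this assumes you already have an SSH key registered with GitHub; the fork URL and username are illustrative):

```bash
# point an existing fork remote at SSH instead of HTTPS (URL is illustrative)
git remote set-url origin git@github.com:<your-username>/pyImSegm.git
git remote -v            # verify the remote URL
ssh -T git@github.com    # test the SSH connection to GitHub
```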
-* All code should have tests (see `test coverage`_ below for more details).
-* All code should be documented, to the same
-  `standard `_ as NumPy and SciPy.
-* For new functionality, always add an example to the gallery.
-* No changes are ever committed without review and approval by two core team members. **Never merge your own pull request.**
-* Examples in the gallery should have a maximum figure width of 8 inches.

+## Guidelines

+- All code should have tests (see `test coverage`\_ below for more details).
+- All code should be documented, to the same
+  `standard `\_ as NumPy and SciPy.
+- For new functionality, always add an example to the gallery.
+- No changes are ever committed without review and approval by two core team members. **Never merge your own pull request.**
+- Examples in the gallery should have a maximum figure width of 8 inches.

## Stylistic Guidelines

-* Set up your editor to remove trailing whitespace. Follow `PEP08 `__. Check code with pyflakes / flake8.
-* Use numpy data types instead of strings (``np.uint8`` instead of ``"uint8"``).
-* Use the following import conventions::
+- Set up your editor to remove trailing whitespace. Follow `PEP8 `\_\_. Check code with pyflakes / flake8.
+- Use numpy data types instead of strings (`np.uint8` instead of `"uint8"`).
+- Use the following import conventions::

   ```
   import numpy as np
   import matplotlib.pyplot as plt

   cimport numpy as cnp  # in Cython code
   ```

-* When documenting array parameters, use ``image : (M, N) ndarray`` and then refer to ``M`` and ``N`` in the docstring, if necessary.
-* Refer to array dimensions as (plane), row, column, not as x, y, z. See :ref:`Coordinate conventions ` in the user guide for more information.
-* Functions should support all input image dtypes. Use utility functions such as ``img_as_float`` to help convert to an appropriate type. The output format can be whatever is most efficient. This allows us to string together several functions into a pipeline
-* Use ``Py_ssize_t`` as data type for all indexing, shape and size variables in C/C++ and Cython code.
-* Wrap Cython code in a pure Python function, which defines the API. This improves compatibility with code introspection tools, which are often not aware of Cython code.
-* For Cython functions, release the GIL whenever possible, using ``with nogil:``.
-
+- When documenting array parameters, use `image : (M, N) ndarray` and then refer to `M` and `N` in the docstring, if necessary.
+- Refer to array dimensions as (plane), row, column, not as x, y, z. See :ref:`Coordinate conventions ` in the user guide for more information.
+- Functions should support all input image dtypes. Use utility functions such as `img_as_float` to help convert to an appropriate type. The output format can be whatever is most efficient. This allows us to string together several functions into a pipeline.
+- Use `Py_ssize_t` as data type for all indexing, shape and size variables in C/C++ and Cython code.
+- Wrap Cython code in a pure Python function, which defines the API. This improves compatibility with code introspection tools, which are often not aware of Cython code.
+- For Cython functions, release the GIL whenever possible, using `with nogil:`.

## Testing

This package has an extensive test suite that ensures correct execution on your system. The test suite has to pass before a pull request can be merged, and tests should be added to cover any modifications to the code base.
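A minimal sketch of what such a test might look like, exercising the dtype guideline above — the file name is illustrative, not an actual project file, and `img_as_float` is assumed to be the scikit-image helper:

```python
# tests/test_dtype_sketch.py -- illustrative sketch, not an actual project test
import numpy as np
from skimage.util import img_as_float


def test_img_as_float_accepts_uint8():
    image = np.zeros((4, 4), dtype=np.uint8)
    out = img_as_float(image)  # any supported input dtype maps to float in [0, 1]
    assert out.dtype == np.float64
    assert 0.0 <= out.min() and out.max() <= 1.0
```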
-We make use of the `pytest `__ testing framework, with tests located in the various ``tests`` folders. +We make use of the `pytest `\_\_ testing framework, with tests located in the various `tests` folders. -To use ``pytest``, ensure that Cython extensions are built and that +To use `pytest`, ensure that Cython extensions are built and that the library is installed in development mode:: + ``` $ pip install -e . ``` + Now, run all tests using:: + ``` $ pytest -v pyImSegm ``` -Use ``--doctest-modules`` to run doctests. + +Use `--doctest-modules` to run doctests. For example, run all tests and all doctests using:: + ``` $ pytest -v --doctest-modules --with-xunit --with-coverage pyImSegm ``` @@ -86,12 +95,15 @@ For example, run all tests and all doctests using:: Tests for a module should ideally cover all code in that module, i.e., statement coverage should be at 100%. -To measure the test coverage, install `pytest-cov `__ (using ``easy_install pytest-cov``) and then run:: +To measure the test coverage, install `pytest-cov `\_\_ (using `easy_install pytest-cov`) and then run:: + ``` $ coverage report ``` + This will print a report with one line for each file in `imsegm`, detailing the test coverage:: + ``` Name Stmts Exec Cover Missing -------------------------------------------------------------- @@ -102,4 +114,4 @@ detailing the test coverage:: ## Bugs -Please `report bugs on GitHub `_. +Please `report bugs on GitHub `\_. diff --git a/.github/ISSUE_TEMPLATE/Bug_report.md b/.github/ISSUE_TEMPLATE/Bug_report.md index 32c9e602..9b673012 100644 --- a/.github/ISSUE_TEMPLATE/Bug_report.md +++ b/.github/ISSUE_TEMPLATE/Bug_report.md @@ -1,19 +1,20 @@ --- name: Bug report about: Create a report to help us improve - --- ## Description -_[Please provide a general introduction to the issue/proposal.]_ -_[If reporting a bug, attach the entire traceback from Python.]_ +_\[Please provide a general introduction to the issue/proposal.\]_ -_[If proposing an enhancement/new feature, provide links to related articles, reference examples, etc.]_ +_\[If reporting a bug, attach the entire traceback from Python.\]_ +_\[If proposing an enhancement/new feature, provide links to related articles, reference examples, etc.\]_ ## Way to reproduce -_[If reporting a bug, please include the following important information:]_ + +_\[If reporting a bug, please include the following important information:\]_ + - [ ] Code example - [ ] Relevant images (if any) - [ ] Operating system and version diff --git a/.github/ISSUE_TEMPLATE/Feature_request.md b/.github/ISSUE_TEMPLATE/Feature_request.md index 417ad70b..e62870e0 100644 --- a/.github/ISSUE_TEMPLATE/Feature_request.md +++ b/.github/ISSUE_TEMPLATE/Feature_request.md @@ -1,21 +1,20 @@ --- name: Feature request about: Suggest an idea for this project - --- **Is your feature request related to a problem? Please describe.** -_[A clear and concise description of what the problem is. Ex. I'm always frustrated when ...]_ +_\[A clear and concise description of what the problem is. Ex. 
I'm always frustrated when ...\]_ **Describe the solution you'd like** -_[A clear and concise description of what you want to happen.]_ +_\[A clear and concise description of what you want to happen.\]_ **Describe alternatives you've considered** -_[A clear and concise description of any alternative solutions or features you've considered.]_ +_\[A clear and concise description of any alternative solutions or features you've considered.\]_ **Additional context** -_[Add any other context or screenshots about the feature request here.]_ +_\[Add any other context or screenshots about the feature request here.\]_ diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 9ed2bbc8..27088475 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -1,6 +1,6 @@ # Description -_[Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.]_ +_\[Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.\]_ Fixes # (issue) @@ -15,16 +15,16 @@ Please delete options that are not relevant. # How Has This Been Tested? -_[Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration]_ +_\[Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration\]_ - [ ] Test A - [ ] Test B **Test Configuration**: -* Firmware version: -* Hardware: -* Toolchain: -* SDK: + +- Firmware version: +- Hardware: +- Toolchain: # Checklist: diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 2eb3d9d8..15e78f85 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -37,6 +37,20 @@ repos: language: python require_serial: false + - repo: https://github.com/executablebooks/mdformat + rev: 0.7.7 + hooks: + - id: mdformat + additional_dependencies: + - mdformat-gfm + - mdformat-black + - mdformat_frontmatter + + - repo: https://github.com/asottile/yesqa + rev: v1.2.3 + hooks: + - id: yesqa + - repo: https://github.com/PyCQA/flake8 rev: 3.9.2 hooks: diff --git a/README.md b/README.md index 1eceed51..99bb3f13 100755 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ [![CI testing](https://github.com/Borda/pyImSegm/workflows/CI%20testing/badge.svg?branch=master&event=push)](https://github.com/Borda/pyImSegm/actions?query=workflow%3A%22CI+testing%22) [![codecov](https://codecov.io/gh/Borda/pyImSegm/branch/master/graph/badge.svg?token=BCvf6F5sFP)](https://codecov.io/gh/Borda/pyImSegm) -[![Codacy Badge](https://api.codacy.com/project/badge/Grade/48b7976bbe9d42bc8452f6f9e573ee70)](https://www.codacy.com/app/Borda/pyImSegm?utm_source=github.com&utm_medium=referral&utm_content=Borda/pyImSegm&utm_campaign=Badge_Grade) +[![Codacy Badge](https://api.codacy.com/project/badge/Grade/48b7976bbe9d42bc8452f6f9e573ee70)](https://www.codacy.com/app/Borda/pyImSegm?utm_source=github.com&utm_medium=referral&utm_content=Borda/pyImSegm&utm_campaign=Badge_Grade) [![CircleCI](https://circleci.com/gh/Borda/pyImSegm.svg?style=svg&circle-token=a30180a28ae7e490c0c0829d1549fcec9a5c59d0)](https://circleci.com/gh/Borda/pyImSegm) 
[![CodeFactor](https://www.codefactor.io/repository/github/borda/pyimsegm/badge)](https://www.codefactor.io/repository/github/borda/pyimsegm)
[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Borda/pyImSegm.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Borda/pyImSegm/context:python)
@@ -18,20 +18,21 @@
 [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/Borda/pyImSegm/master?filepath=notebooks) -->

----
+______________________________________________________________________

## Superpixel segmentation with GraphCut regularisation

Image segmentation is widely used as an initial phase of many image processing tasks in computer vision and image analysis. Many recent segmentation methods use superpixels because they reduce the size of the segmentation problem by an order of magnitude. Also, features on superpixels are much more robust than features on pixels only. We use spatial regularisation on superpixels to make segmented regions more compact. The segmentation pipeline comprises (i) computation of superpixels; (ii) extraction of descriptors such as colour and texture; (iii) soft classification, using a standard classifier for supervised learning, or the Gaussian Mixture Model for unsupervised learning; (iv) final segmentation using Graph Cut. We use this segmentation pipeline on real-world applications in medical imaging (see [sample images](data-images/)).
-We also show that [unsupervised segmentation](notebooks/segment-2d_slic-fts-clust-gc.ipynb) is sufficient for some situations,
- and provides similar results to those obtained using [trained segmentation](notebooks/segment-2d_slic-fts-classif-gc.ipynb).
+We also show that [unsupervised segmentation](notebooks/segment-2d_slic-fts-clust-gc.ipynb) is sufficient for some situations,
+and provides similar results to those obtained using [trained segmentation](notebooks/segment-2d_slic-fts-classif-gc.ipynb).

![schema](assets/schema_slic-fts-clf-gc.jpg)

**Sample ipython notebooks:**
+
-* [Supervised segmentation](notebooks/segment-2d_slic-fts-classif-gc.ipynb) requires training annotation
-* [Unsupervised segmentation](notebooks/segment-2d_slic-fts-clust-gc.ipynb) just asks for expected number of classes
-* **partially annotated images** with missing annotation is marked by a negative number
+- [Supervised segmentation](notebooks/segment-2d_slic-fts-classif-gc.ipynb) requires training annotation
+- [Unsupervised segmentation](notebooks/segment-2d_slic-fts-clust-gc.ipynb) just asks for expected number of classes
+- **partially annotated images** with missing annotation are marked by a negative number

**Illustration**

@@ -40,15 +41,15 @@
Reference: _Borovec J., Svihlik J., Kybic J., Habart D. (2017). **Supervised and unsupervised segmentation using superpixels, model estimation, and Graph Cut.** In: Journal of Electronic Imaging._

-
## Object centre detection and Ellipse approximation

An image processing pipeline to detect and localize Drosophila egg chambers that consists of the following steps: (i) superpixel-based image segmentation into relevant tissue classes (see above); (ii) detection of egg center candidates using label histograms and ray features; (iii) clustering of center candidates; and (iv) area-based maximum likelihood ellipse model fitting.
- See our [Poster](http://cmp.felk.cvut.cz/~borovji3/documents/poster-MLMI2017.compressed.pdf) related to this work.
+See our [Poster](http://cmp.felk.cvut.cz/~borovji3/documents/poster-MLMI2017.compressed.pdf) related to this work.

**Sample ipython notebooks:**
+
- [Center detection](notebooks/egg-center_candidates-clustering.ipynb) consists of center candidate training and prediction, and candidate clustering.
- [Ellipse fitting](notebooks/egg-detect_ellipse-fitting.ipynb) with given estimated center structure segmentation.

**Illustration**

@@ -62,9 +63,10 @@ Reference: _Borovec J., Kybic J., Nava R. (2017) **Detection and Localization of

Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed approach differs from standard region growing in three essential aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speedup. Second, our method uses learned statistical shape properties which encourage growing towards plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as energy minimisation and is solved either greedily or iteratively using GraphCuts.

**Sample ipython notebooks:**
+
- [General GraphCut](notebooks/egg_segment_graphcut.ipynb) from given centers and initial structure segmentation.
- [Shape modeling](notebooks/RG2Sp_shape-models.ipynb) estimation from training examples.
- [Region growing](notebooks/RG2Sp_region-growing.ipynb) from given centers and initial structure segmentation with shape models.

**Illustration**

@@ -73,13 +75,14 @@ Region growing is a classical image segmentation method based on hierarchical re
Reference: _Borovec J., Kybic J., Sugimoto, A. (2017). **Region growing using superpixels with learned shape prior.** In: Journal of Electronic Imaging._

----
+______________________________________________________________________

## Installation and configuration

**Configure local environment**

Create your own local environment, for more see the [User Guide](https://pip.pypa.io/en/latest/user_guide.html), and install dependencies; requirements.txt contains the list of packages and can be installed as
+
```bash
@duda:~$ cd pyImSegm
@duda:~/pyImSegm$ virtualenv env
@@ -87,7 +90,9 @@
(env)@duda:~/pyImSegm$ pip install -r requirements.txt
(env)@duda:~/pyImSegm$ python ...
```
+
and in the end terminate it
+
```bash
(env)@duda:~/pyImSegm$ deactivate
```

@@ -105,23 +110,28 @@ Moreover, we are using python [GraphCut wrapper](https://github.com/Borda/pyGCO)

**Compilation**

We have implemented a `cython` version of some functions, especially computing descriptors, which need to be compiled before use
+
```bash
python setup.py build_ext --inplace
```
+
If loading of the compiled descriptors in `cython` fails, it automatically falls back to `numpy`, which gives the same results but is significantly slower.

**Installation**

The package can be installed via pip
+
```bash
pip install git+https://github.com/Borda/pyImSegm.git
```
+
or using `setuptools` from a local folder
+
```bash
python setup.py install
```

----
+______________________________________________________________________

## Experiments

@@ -131,93 +141,92 @@ Short description of our three sets of experiments that together compose single

1. **Center detection and ellipse fitting**
1. **Region growing with the learned shape prior**

-
### Annotation tools

We introduce some useful tools for working with image annotation and segmentation.

-* **Quantization:** in case you have some smooth colour labelling in your images you can remove them with following quantisation script.
-  ```bash
-  python handling_annotations/run_image_color_quantization.py \
-  -imgs "./data-images/drosophila_ovary_slice/segm_rgb/*.png" \
-  -m position -thr 0.01 --nb_workers 2
-  ```
-* **Paint labels:** concerting image labels into colour space and other way around.
-  ```bash
-  python handling_annotations/run_image_convert_label_color.py \
-  -imgs "./data-images/drosophila_ovary_slice/segm/*.png" \
-  -out ./data-images/drosophila_ovary_slice/segm_rgb
-  ```
-* **Visualisation:** having input image and its segmentation we can use simple visualisation which overlap the segmentation over input image.
-  ```bash
-  python handling_annotations/run_overlap_images_segms.py \
-  -imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
-  -segs ./data-images/drosophila_ovary_slice/segm \
-  -out ./results/overlap_ovary_segment
-  ```
-* **In-painting** selected labels in segmentation.
-  ```bash
-  python handling_annotations/run_segm_annot_inpaint.py \
-  -imgs "./data-images/drosophila_ovary_slice/segm/*.png" \
-  --label 4
-  ```
-* **Replace labels:** change labels in input segmentation into another set of labels in 1:1 schema.
-  ```bash
-  python handling_annotations/run_segm_annot_relabel.py \
-  -out ./results/relabel_center_levels \
-  --label_old 2 3 --label_new 1 1
-  ```
-
+- **Quantization:** in case you have some smooth colour labelling in your images you can remove it with the following quantisation script.
+  ```bash
+  python handling_annotations/run_image_color_quantization.py \
+  -imgs "./data-images/drosophila_ovary_slice/segm_rgb/*.png" \
+  -m position -thr 0.01 --nb_workers 2
+  ```
+- **Paint labels:** converting image labels into colour space and the other way around.
+  ```bash
+  python handling_annotations/run_image_convert_label_color.py \
+  -imgs "./data-images/drosophila_ovary_slice/segm/*.png" \
+  -out ./data-images/drosophila_ovary_slice/segm_rgb
+  ```
+- **Visualisation:** having an input image and its segmentation, we can use a simple visualisation which overlays the segmentation over the input image.
+  ```bash
+  python handling_annotations/run_overlap_images_segms.py \
+  -imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
+  -segs ./data-images/drosophila_ovary_slice/segm \
+  -out ./results/overlap_ovary_segment
+  ```
+- **In-painting** selected labels in segmentation.
+  ```bash
+  python handling_annotations/run_segm_annot_inpaint.py \
+  -imgs "./data-images/drosophila_ovary_slice/segm/*.png" \
+  --label 4
+  ```
+- **Replace labels:** change labels in the input segmentation into another set of labels in a 1:1 scheme.
+  ```bash
+  python handling_annotations/run_segm_annot_relabel.py \
+  -out ./results/relabel_center_levels \
+  --label_old 2 3 --label_new 1 1
+  ```

### Semantic (un/semi)supervised segmentation

We utilise (un)supervised segmentation according to given training examples or some expectations.
![visual debug](assets/visual_img_43_debug.jpg)

-* Evaluate superpixels (with given SLIC parameters) quality against given segmentation. It helps to find out the best SLIC configuration.
-  ```bash
-  python experiments_segmentation/run_eval_superpixels.py \
-  -imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
-  -segm "./data-images/drosophila_ovary_slice/annot_eggs/*.png" \
-  --img_type 2d_split \
-  --slic_size 20 --slic_regul 0.25 --slico
-  ```
-* Perform **Un-Supervised** segmentation in images given in CSV
-  ```bash
-  python experiments_segmentation/run_segm_slic_model_graphcut.py \
-  -l ./data-images/langerhans_islets/list_lang-isl_imgs-annot.csv -i "" \
-  -cfg experiments_segmentation/sample_config.yml \
-  -o ./results -n langIsl --nb_classes 3 --visual --nb_workers 2
-  ```
-  OR specified on particular path:
-  ```bash
-  python experiments_segmentation/run_segm_slic_model_graphcut.py \
-  -l "" -i "./data-images/langerhans_islets/image/*.jpg" \
-  -cfg ./experiments_segmentation/sample_config.yml \
-  -o ./results -n langIsl --nb_classes 3 --visual --nb_workers 2
-  ```
-  ![unsupervised](assets/imag-disk-20_gmm.jpg)
-* Perform **Supervised** segmentation with afterwards evaluation.
-  ```bash
-  python experiments_segmentation/run_segm_slic_classif_graphcut.py \
-  -l ./data-images/drosophila_ovary_slice/list_imgs-annot-struct.csv \
-  -i "./data-images/drosophila_ovary_slice/image/*.jpg" \
-  --path_config ./experiments_segmentation/sample_config.yml \
-  -o ./results -n Ovary --img_type 2d_split --visual --nb_workers 2
-  ```
-  ![supervised](assets/imag-disk-20_train.jpg)
-* Perform **Semi-Supervised** is using the the supervised pipeline with not fully annotated images.
-* For both experiment you can evaluate segmentation results.
-  ```bash
-  python experiments_segmentation/run_compute-stat_annot-segm.py \
-  -a "./data-images/drosophila_ovary_slice/annot_struct/*.png" \
-  -s "./results/experiment_segm-supervise_ovary/*.png" \
-  -i "./data-images/drosophila_ovary_slice/image/*.jpg" \
-  -o ./results/evaluation --visual
-  ```
-  ![vusial](assets/segm-visual_D03_sy04_100x.jpg)
+- Evaluate superpixel quality (with given SLIC parameters) against a given segmentation. It helps to find the best SLIC configuration.
+  ```bash
+  python experiments_segmentation/run_eval_superpixels.py \
+  -imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
+  -segm "./data-images/drosophila_ovary_slice/annot_eggs/*.png" \
+  --img_type 2d_split \
+  --slic_size 20 --slic_regul 0.25 --slico
+  ```
+- Perform **Un-Supervised** segmentation on images given in a CSV
+  ```bash
+  python experiments_segmentation/run_segm_slic_model_graphcut.py \
+  -l ./data-images/langerhans_islets/list_lang-isl_imgs-annot.csv -i "" \
+  -cfg experiments_segmentation/sample_config.yml \
+  -o ./results -n langIsl --nb_classes 3 --visual --nb_workers 2
+  ```
+  OR specify a particular path:
+  ```bash
+  python experiments_segmentation/run_segm_slic_model_graphcut.py \
+  -l "" -i "./data-images/langerhans_islets/image/*.jpg" \
+  -cfg ./experiments_segmentation/sample_config.yml \
+  -o ./results -n langIsl --nb_classes 3 --visual --nb_workers 2
+  ```
+  ![unsupervised](assets/imag-disk-20_gmm.jpg)
+- Perform **Supervised** segmentation with subsequent evaluation.
+  ```bash
+  python experiments_segmentation/run_segm_slic_classif_graphcut.py \
+  -l ./data-images/drosophila_ovary_slice/list_imgs-annot-struct.csv \
+  -i "./data-images/drosophila_ovary_slice/image/*.jpg" \
+  --path_config ./experiments_segmentation/sample_config.yml \
+  -o ./results -n Ovary --img_type 2d_split --visual --nb_workers 2
+  ```
+  ![supervised](assets/imag-disk-20_train.jpg)
+- **Semi-Supervised** segmentation uses the supervised pipeline with not fully annotated images.
+- For both experiments you can evaluate the segmentation results.
+  ```bash
+  python experiments_segmentation/run_compute-stat_annot-segm.py \
+  -a "./data-images/drosophila_ovary_slice/annot_struct/*.png" \
+  -s "./results/experiment_segm-supervise_ovary/*.png" \
+  -i "./data-images/drosophila_ovary_slice/image/*.jpg" \
+  -o ./results/evaluation --visual
+  ```
+  ![visual](assets/segm-visual_D03_sy04_100x.jpg)

The previous two (un)supervised segmentations accept a [configuration file](experiments_segmentation/sample_config.yml) (YAML) via the parameter `-cfg`, with some extra parameters which were not passed as arguments, for instance:
+
```yaml
slic_size: 35
slic_regul: 0.2
@@ -239,57 +248,57 @@ In general, the input is a formatted list (CSV file) of input images and annotat

**Experiment sequence is the following:**

1. We can create the annotation completely manually or use the following script which uses annotation of individual objects and creates the zones automatically.
   ```bash
   python experiments_ovary_centres/run_create_annotation.py
   ```
1. With zone annotation, we train a classifier for centre candidate prediction. The annotation can be a CSV file with annotated centres as points, and the zone of positive examples is set uniformly as the circular neighbourhood around these points. Another way (preferable) is to use an annotated image with marked zones for positive, negative and neutral examples.
   ```bash
   python experiments_ovary_centres/run_center_candidate_training.py -list none \
   -segs "./data-images/drosophila_ovary_slice/segm/*.png" \
   -imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
   -centers "./data-images/drosophila_ovary_slice/center_levels/*.png" \
   -out ./results -n ovary
   ```
1. Having a trained classifier, we perform center prediction composed of two steps: i) center candidate prediction and ii) candidate clustering.
   ```bash
   python experiments_ovary_centres/run_center_prediction.py -list none \
   -segs "./data-images/drosophila_ovary_slice/segm/*.png" \
   -imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
   -centers ./results/detect-centers-train_ovary/classifier_RandForest.pkl \
   -out ./results -n ovary
   ```
1. Assuming you have an expert annotation, you can compute statistics such as missed eggs.
   ```bash
   python experiments_ovary_centres/run_center_evaluation.py
   ```
1. This is just the cut-out clustering, in case you want to use different parameters.
   ```bash
   python experiments_ovary_centres/run_center_clustering.py \
   -segs "./data-images/drosophila_ovary_slice/segm/*.png" \
   -imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
   -centers "./results/detect-centers-train_ovary/candidates/*.csv" \
   -out ./results
   ```
1. Match the ellipses to the user annotation.
   ```bash
   python experiments_ovary_detect/run_ellipse_annot_match.py \
   -info "~/Medical-drosophila/all_ovary_image_info_for_prague.txt" \
   -ells "~/Medical-drosophila/RESULTS/3_ellipse_ransac_crit_params/*.csv" \
   -out ~/Medical-drosophila/RESULTS
   ```
1. Cut eggs by stages and normalise them to mean size.
   ```bash
   python experiments_ovary_detect/run_ellipse_cut_scale.py \
   -info ~/Medical-drosophila/RESULTS/info_ovary_images_ellipses.csv \
   -imgs "~/Medical-drosophila/RESULTS/0_input_images_png/*.png" \
   -out ~/Medical-drosophila/RESULTS/images_cut_ellipse_stages
   ```
1. Rotate (swap) extracted eggs according to the larger amount of mass.
   ```bash
   python experiments_ovary_detect/run_egg_swap_orientation.py \
   -imgs "~/Medical-drosophila/RESULTS/atlas_datasets/ovary_images/stage_3/*.png" \
   -out ~/Medical-drosophila/RESULTS/atlas_datasets/ovary_images/stage_3
   ```

![ellipse fitting](assets/insitu7544_ellipses.jpg)

@@ -298,6 +307,7 @@
In case you do not have estimated object centres, you can use [plugins](ij_macros) for landmarks import/export for [Fiji](http://fiji.sc/).

**Note:** install the multi-snake package which is used in the multi-method segmentation experiment.
+
```bash
pip install --user git+https://github.com/Borda/morph-snakes.git
```

**Experiment sequence is the following:**

1. Estimating the shape model from a set of training images containing a single egg annotation.
   ```bash
   python experiments_ovary_detect/run_RG2Sp_estim_shape-models.py \
   -annot "~/Medical-drosophila/egg_segmentation/mask_2d_slice_complete_ind_egg/*.png" \
   -out ./data-images -nb 15
   ```
1. Run several segmentation techniques on each image.
   ```bash
   python experiments_ovary_detect/run_ovary_egg-segmentation.py \
   -list ./data-images/drosophila_ovary_slice/list_imgs-segm-center-points.csv \
   -out ./results -n ovary_image --nb_workers 1 \
   -m ellipse_moments \
      ellipse_ransac_mmt \
      ellipse_ransac_crit \
      GC_pixels-large \
      GC_pixels-shape \
      GC_slic-large \
      GC_slic-shape \
      rg2sp_greedy-mixture \
      rg2sp_GC-mixture \
      watershed_morph
   ```
1. Evaluate your segmentation in ./results against the expert annotation.
   ```bash
   python experiments_ovary_detect/run_ovary_segm_evaluation.py --visual
   ```
1. In the end, cut out individual segmented objects with a minimal bounding box.
   ```bash
   python experiments_ovary_detect/run_cut_segmented_objects.py \
   -annot "./data-images/drosophila_ovary_slice/annot_eggs/*.png" \
   -img "./data-images/drosophila_ovary_slice/segm/*.png" \
   -out ./results/cut_images --padding 50
   ```
1. Finally, perform visualisation of the segmentation results together with the expert annotation.
   ```bash
   python experiments_ovary_detect/run_export_user-annot-segm.py
   ```
   ![user-annot](assets/insitu7545_user-annot-segm.jpg)

----
+______________________________________________________________________

## References

For complete references see [BibTeX](docs/references.bib).
+
1. Borovec J., Svihlik J., Kybic J., Habart D. (2017). **Supervised and unsupervised segmentation using superpixels, model estimation, and Graph Cut.** SPIE Journal of Electronic Imaging 26(6), 061610. [DOI: 10.1117/1.JEI.26.6.061610](http://doi.org/10.1117/1.JEI.26.6.061610).
1. Borovec J., Kybic J., Nava R. (2017) **Detection and Localization of Drosophila Egg Chambers in Microscopy Images.** In: Wang Q., Shi Y., Suk HI., Suzuki K. (eds) Machine Learning in Medical Imaging. MLMI 2017. LNCS, vol 10541. Springer, Cham. [DOI: 10.1007/978-3-319-67389-9_3](http://doi.org/10.1007/978-3-319-67389-9_3).
1. Borovec J., Kybic J., Sugimoto, A. (2017). **Region growing using superpixels with learned shape prior.** SPIE Journal of Electronic Imaging 26(6), 061611. [DOI: 10.1117/1.JEI.26.6.061611](http://doi.org/10.1117/1.JEI.26.6.061611).

diff --git a/docs/source/conf.py b/docs/source/conf.py
index c7715359..7af92070 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -26,7 +26,7 @@
 PATH_ROOT = os.path.realpath(os.path.join(PATH_HERE, PATH_UP))
 sys.path.insert(0, os.path.abspath(PATH_ROOT))

-import imsegm  # noqa: E402
+import imsegm

 # -- Project information -----------------------------------------------------
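Not part of the patch itself — a typical way to exercise the newly added hooks locally, assuming `pre-commit` is installed; the hook ids (`mdformat`, `yesqa`) match the configuration added above:

```bash
pip install pre-commit
pre-commit install                   # run the hooks automatically on each commit
pre-commit run mdformat --all-files  # reformat all Markdown files now
pre-commit run yesqa --all-files     # strip unnecessary `# noqa` comments
```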