Expand annotations to masks workflow #778
Conversation
…c rois, miniature wsis etc. This includes the RGB images, contours, visualizations, etc. at the same prespecified magnification / micron-per-pixel resolution.
@manthey It would be a very nice endpoint to have, though, as @cooperlab suggested, since it allows direct use of the data by any user/developer. They just draw annotations, then use this method to do the magic and parse everything into a dataset they can directly use for training models.
@manthey Thanks for the recent fixes! They worked like a charm and now the build passes. Feel free to review this and I'll merge when it's done. Cheers!
merge master updates
merge master updates
…deArchive/HistomicsTK into create-review-gallery
Merge pull request #790 from DigitalSlideArchive/annotation-backup-ca…
This reverts commit 6356310
Expand annot to mask 2
Fix polymerger bug
Mongo to sqlite
Mongo to sqlite
Create review gallery
incorp master updates
histomicstk/annotations_and_masks/annotations_to_masks_handler.py
@manthey OK, I've incorporated your corrections. Feel free to approve or suggest others. Thanks!
Significantly expand and improve the annotations-to-masks handler so it is directly usable for getting data to train models from manually drawn annotations.
Overview:
This includes tools to parse annotations from an item (slide) into masks to use in training and evaluating imaging algorithms. Two "versions" of this workflow exist:
Get labeled mask for any region in a whole-slide image (user-defined)
Get labeled mask for areas enclosed within special "region-of-interest" (ROI) annotations that have been drawn by the user. This involves mapping annotations (rectangles/polygons) to ROIs and making one mask per ROI.
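To make the second workflow concrete, here is a minimal sketch of how annotations might be mapped to the ROIs that enclose them before one mask is made per ROI. The function name and the centroid-containment rule are illustrative assumptions, not the handler's actual implementation:

```python
def assign_to_rois(annotation_bboxes, roi_bboxes):
    """Map each annotation to the ROI containing its centroid (a sketch).

    Boxes are (xmin, ymin, xmax, ymax) tuples in slide coordinates.
    Returns {roi_index: [annotation indices]}, so that one labeled mask
    can then be produced per ROI from its assigned annotations.
    """
    assignment = {i: [] for i in range(len(roi_bboxes))}
    for ai, (xmin, ymin, xmax, ymax) in enumerate(annotation_bboxes):
        cx, cy = (xmin + xmax) / 2, (ymin + ymax) / 2
        for ri, (rx0, ry0, rx1, ry1) in enumerate(roi_bboxes):
            if rx0 <= cx <= rx1 and ry0 <= cy <= ry1:
                assignment[ri].append(ai)
                break  # each annotation goes to at most one ROI
    return assignment
```

Annotations whose centroid falls outside every ROI are simply dropped, which matches the idea that only areas enclosed within ROI annotations are parsed.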
The user uses a csv file like the one in
histomicstk/annotations_and_masks/tests/test_files/sample_GTcodes.csv
to control the pixel values assigned to the masks, the overlay order of the various annotation groups, which groups are considered ROIs, etc. Note that we use the Girder definition of the term "group" here: an annotation style indicating a certain class, such as "tumor" or "necrosis".
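As a sketch of how such a control file can be consumed, the snippet below parses a hypothetical GTcodes table; the column names here are illustrative assumptions, and the real ones live in sample_GTcodes.csv in the test files:

```python
import csv
import io

# Hypothetical GTcodes table (columns are assumed for illustration)
GTCODES_CSV = """group,overlay_order,GT_code,is_roi,color
roi,0,254,1,rgb(200,0,150)
tumor,1,1,0,rgb(255,0,0)
stroma,2,2,0,rgb(255,255,0)
necrosis,3,3,0,rgb(0,0,0)
"""

rows = list(csv.DictReader(io.StringIO(GTCODES_CSV)))

# Pixel value assigned to each annotation group in the output mask
group_to_code = {r["group"]: int(r["GT_code"]) for r in rows}
# Groups treated as special region-of-interest annotations
roi_groups = [r["group"] for r in rows if r["is_roi"] == "1"]
# Painting order: later groups overwrite earlier ones where they overlap
paint_order = sorted(rows, key=lambda r: int(r["overlay_order"]))
```

One row per group keeps the mapping from annotation style to mask pixel value, ROI status, and overlay order in a single user-editable place.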
This adds a lot of functionality on top of the API endpoints that get annotations as a list of dictionaries, including handling the following complex situations:
Getting RGB images and labeled masks at the same magnification/resolution
User-defined regions to get, including "cropping" of annotations to desired bounds
Getting user-drawn ROIs, including rotated rectangles and polygons
Overlapping annotations
"Background" class (e.g. anything not-otherwise-specified is stroma)
Getting contours and bounding boxes relative to images at the same resolution, to be used for training object localization models like Faster-RCNN.
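Two of the situations above, overlapping annotations and a "background" class, can be sketched with a toy rasterizer. This is not the handler's code; it only illustrates the overlay idea with rectangular regions:

```python
import numpy as np

def paint_mask(shape, regions, background_code):
    """Paint rectangular annotation regions onto a labeled mask (a sketch).

    regions: list of (ymin, ymax, xmin, xmax, code) tuples, pre-sorted by
    overlay order; later entries overwrite earlier ones, which is one way
    to resolve overlapping annotations. Pixels no region touches keep the
    background code (e.g. anything not-otherwise-specified is stroma).
    """
    mask = np.full(shape, background_code, dtype=np.uint8)
    for ymin, ymax, xmin, xmax, code in regions:
        mask[ymin:ymax, xmin:xmax] = code
    return mask

# Tumor (code 1) painted first, then necrosis (code 3) over it;
# everything else defaults to stroma (code 2).
mask = paint_mask((10, 10), [(2, 8, 2, 8, 1), (5, 10, 5, 10, 3)],
                  background_code=2)
```

The same principle extends to polygons: fill each group's shapes in overlay order so the last-painted group wins wherever annotations overlap.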
There are four run modes:
wsi: get scaled up/down version of mask of whole slide
min_bounding_box: get minimum box for all annotations in slide
manual_bounds: use given ROI bounds provided by the 'bounds' param
polygonal_bounds: use manually-drawn polygonal (or rectangular) ROI boundaries
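The min_bounding_box mode, for instance, amounts to taking the tightest box around every annotation vertex in the slide. A minimal sketch (the function name and return keys are illustrative assumptions):

```python
import numpy as np

def min_bounding_box(element_coords):
    """Minimum box enclosing all annotation elements (a sketch).

    element_coords: list of (N, 2) arrays of (x, y) vertices, one array
    per annotation element in the slide.
    """
    pts = np.concatenate(element_coords, axis=0)
    xmin, ymin = pts.min(axis=0)
    xmax, ymax = pts.max(axis=0)
    return {'XMIN': int(xmin), 'XMAX': int(xmax),
            'YMIN': int(ymin), 'YMAX': int(ymax)}
```

In manual_bounds mode such a box would instead come directly from the user-supplied 'bounds' param, and in wsi mode from the full slide extent.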
Be sure to check out the annotations_to_masks_handler.ipynb Jupyter notebook for implementation examples.