Commit d33c498

Merge pull request #78 from qupath/0.4

Merging v0.4 changes

petebankhead committed Sep 13, 2023
2 parents b733d46 + 74f5832

Showing 15 changed files with 212 additions and 84 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -4,3 +4,4 @@
!.vscode/settings.json

_build/**
venv
12 changes: 11 additions & 1 deletion README.md
@@ -2,6 +2,16 @@

This contains the source for QuPath's documentation, hosted at https://qupath.readthedocs.io

## Building locally

To build this locally, you should first install the following (possibly in a [venv](https://docs.python.org/3/library/venv.html)):
- `sphinx-build`
- `sphinx_rtd_theme`
- `myst_parser`
- `readthedocs-sphinx-search`

You will also need the command-line tool `make` (e.g., [GNU Make](https://www.gnu.org/software/make/)).
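
A full setup might look like the following (a sketch only; the PyPI package names `sphinx` (which provides `sphinx-build`), `sphinx-rtd-theme`, `myst-parser`, and `readthedocs-sphinx-search`, and the standard Sphinx `html` Make target, are assumptions):

```bash
# Create and activate a virtual environment (macOS/Linux syntax)
python3 -m venv venv
source venv/bin/activate

# Install the documentation dependencies (assumed PyPI package names)
pip install sphinx sphinx-rtd-theme myst-parser readthedocs-sphinx-search

# Build the HTML docs from the repository root
make html
```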

## License

All original content here is shared under a Creative Commons license ([CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/)).
@@ -18,4 +28,4 @@ In some places, the docs include public images from other sources, e.g. within s
For download links and information about their licenses, see [the Acknowledgements page](https://qupath.readthedocs.io/en/stable/docs/intro/acknowledgements.html).

> All this refers only to the documentation on this repo.
> For license info about the QuPath *software*, see https://github.com/qupath/qupath
2 changes: 1 addition & 1 deletion conf.py
@@ -101,7 +101,7 @@

html_favicon = 'docs/images/QuPath.ico'

release = '0.4.3'
release = '0.4.4'
version = '0.4'

# myst_heading_anchors = 2
8 changes: 4 additions & 4 deletions docs/concepts/images.md
@@ -4,7 +4,7 @@ QuPath is software for **image analysis**.
This section gives a brief overview of digital images, and the techniques and concepts needed to analyze them using QuPath.

:::{tip}
For a more extensive introduction to images and bioimage analysis concepts, see [Analyzing fluorescence microscopy images with ImageJ].
For a more extensive introduction to images and bioimage analysis concepts, see [Introduction to Bioimage Analysis].

```{image} images/analyzing_book.png
:align: center
@@ -318,8 +318,8 @@ Objects (e.g. annotations, cells) also should remember which plane they belong t
For more sophisticated multidimensional image analysis you might want to turn to other software, such as [Fiji].

[a pixel is not a little square]: http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
[analyzing fluorescence microscopy images with imagej]: https://petebankhead.gitbooks.io/imagej-intro/content/
[blur and the psf]: https://petebankhead.gitbooks.io/imagej-intro/content/chapters/formation_spatial/formation_spatial.html
[analyzing fluorescence microscopy images with imagej]: https://bioimagebook.github.io/
[blur and the psf]: https://bioimagebook.github.io/chapters/3-fluorescence/2-formation_spatial/formation_spatial.html
[fiji]: http://fiji.sc
[ruifrok and johnston]: https://www.ncbi.nlm.nih.gov/pubmed/11531144
[types and bit-depths]: https://petebankhead.gitbooks.io/imagej-intro/content/chapters/bit_depths/bit_depths.html
[types and bit-depths]: https://bioimagebook.github.io/chapters/1-concepts/3-bit_depths/bit_depths.html
2 changes: 2 additions & 0 deletions docs/concepts/objects.md
@@ -74,12 +74,14 @@ Some of these performance issues have been addressed in v0.2.0, and it is now fe
Nevertheless, working with annotations remains rather more computationally expensive than working with detections.
:::

(concepts-tiles)=
:::{admonition} Special examples of detections
In addition to the types defined above, there are two more specialized detection subtypes:

> **Cell objects** <br />
> This has two ROIs - the main one represents the cell boundary, while a second (optional) ROI represents the nucleus.
>
>
> **Tile objects** <br />
> Differs from a standard detection in that a tile has less intrinsic 'meaning' in itself - i.e. it does not directly correspond to a recognizable structure within the image.
> See {doc}`../tutorials/superpixels` for an example of tiles in action.
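
As an aside, the two-ROI structure of cell objects is easy to see in a script. The following is a minimal sketch (not part of the changed docs), assuming it runs inside QuPath after cell detection, where `getCellObjects()` is available:

```java
// Minimal sketch: read both ROIs of each detected cell
for (var cell : getCellObjects()) {
    var boundary = cell.getROI()       // main ROI: the cell boundary
    var nucleus = cell.getNucleusROI() // optional second ROI: the nucleus (may be null)
    if (nucleus != null)
        println("Cell area: ${boundary.getArea()}, nucleus area: ${nucleus.getArea()}")
}
```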
90 changes: 45 additions & 45 deletions docs/deep/djl.md
@@ -79,22 +79,22 @@ You can alternatively use `import qupath.ext.djl.*`, which imports other classes
For DJL, the information about each model is stored as an `Artifact`.
Here, we access all the artifacts available for object detection, and select the first one.

```groovy
```java
import qupath.ext.djl.*

def artifacts = DjlZoo.listObjectDetectionModels()
def firstArtifact = artifacts[0]
var artifacts = DjlZoo.listObjectDetectionModels()
var firstArtifact = artifacts[0]
println(firstArtifact)
```

We can see a bit more by converting the artifact to JSON:

```groovy
```java
import qupath.ext.djl.*

def artifacts = DjlZoo.listObjectDetectionModels()
def firstArtifact = artifacts[0]
def json = GsonTools.getInstance(true).toJson(firstArtifact)
var artifacts = DjlZoo.listObjectDetectionModels()
var firstArtifact = artifacts[0]
var json = GsonTools.getInstance(true).toJson(firstArtifact)
println(json)
```

@@ -128,22 +128,22 @@ The `GsonTools.getInstance(true)` means that the JSON will use pretty-printing (
The built-in zoo models are generally intended for 'regular' photos, not microscopy or biomedical images.
The following script takes a model intended for object detection and applies it to an image of a particularly attractive guinea pig contemplating his pellets.

```groovy
```java
import qupath.ext.djl.*

// Allow model to be downloaded if it's not already
boolean allowDownsamples = true

// Get an object detection model from the zoo
def artifacts = DjlZoo.listObjectDetectionModels()
def artifact = artifacts[0]
var artifacts = DjlZoo.listObjectDetectionModels()
var artifact = artifacts[0]

// Load the model
def criteria = DjlZoo.loadModel(artifact, allowDownsamples)
var criteria = DjlZoo.loadModel(artifact, allowDownsamples)

// Apply the detection to the current image
def imageData = getCurrentImageData()
def detected = DjlZoo.detect(criteria, imageData)
var imageData = getCurrentImageData()
var detected = DjlZoo.detect(criteria, imageData)
println "Detected objects: ${detected.orElse([])}"
```

@@ -184,19 +184,19 @@ These don't generate bounding boxes, but rather classify each pixel.

The following Groovy script applies a semantic segmentation model, and converts the output to QuPath annotations.

```groovy
```java
import qupath.ext.djl.*

// Get a semantic segmentation model
boolean allowDownloads = true
def artifacts = DjlZoo.listSemanticSegmentationModels()
def artifact = artifacts[0]
var artifacts = DjlZoo.listSemanticSegmentationModels()
var artifact = artifacts[0]
println artifact

// Apply the model
def imageData = getCurrentImageData()
def model = DjlZoo.loadModel(artifact, allowDownloads)
def segmented = DjlZoo.segmentAnnotations(
var imageData = getCurrentImageData()
var model = DjlZoo.loadModel(artifact, allowDownloads)
var segmented = DjlZoo.segmentAnnotations(
model,
imageData)
println(segmented.orElse([]))
@@ -220,32 +220,32 @@ This might be useful for applications such as stain normalization.
However here we'll use the DJL model zoo to instead see our guinea pig depicted in the styles of various artists.
We convert the output into an ImageJ-friendly form.

```groovy
```java
import qupath.ext.djl.*
import ai.djl.Application.CV

// Get all the image generation models with an 'artist' property
// Note that other image generation models may not work (since they expect different inputs)
def artifacts = DjlZoo.listModels(CV.IMAGE_GENERATION)
artifacts = artifacts.findAll(a -> a.properties.getOrDefault('artist', null))
var artifacts = DjlZoo.listModels(CV.IMAGE_GENERATION)
artifacts = artifacts.findAll(a -> a.properties.getOrDefault("artist", null))

// Get an image
// Note: this shouldn't be too big! Define a maximum dimension
double maxDim = 1024
def server = getCurrentServer()
var server = getCurrentServer()
double downsample = Math.max(server.getWidth(), server.getHeight()) / maxDim

def request = RegionRequest.createInstance(server, Math.max(1.0, downsample))
def img = server.readRegion(request)
var request = RegionRequest.createInstance(server, Math.max(1.0, downsample))
var img = server.readRegion(request)

// Show all the predictions
for (def artifact : artifacts) {
def artist = artifact.properties['artist']
for (var artifact : artifacts) {
var artist = artifact.properties["artist"]
println("$artist is painting...")
try (def model = DjlZoo.loadModel(artifact, true)) {
try (def predictor = model.newPredictor()) {
try (var model = DjlZoo.loadModel(artifact, true)) {
try (var predictor = model.newPredictor()) {
// Show using ImageJ
def output = DjlZoo.imageToImage(predictor, img)
var output = DjlZoo.imageToImage(predictor, img)
new ij.ImagePlus(artist, output).show()
}
}
@@ -283,35 +283,35 @@ width: 45%

Alternatively, the output image can be displayed in QuPath as an overlay.
In this case, it is automatically rescaled to cover the full image.
The opacity can be controled using the slider in the toolbar.
The opacity can be controlled using the slider in the toolbar.


```groovy
```java
import qupath.ext.djl.*
import ai.djl.Application.CV
import qupath.lib.gui.viewer.overlays.*

// Get all the image generation models with an 'artist' property
def artifacts = DjlZoo.listModels(CV.IMAGE_GENERATION)
def artifact = artifacts.find(a -> a.properties['artist'] == 'vangogh')
var artifacts = DjlZoo.listModels(CV.IMAGE_GENERATION)
var artifact = artifacts.find(a -> a.properties["artist"] == "vangogh")

// Get an image
double maxDim = 1024
def server = getCurrentServer()
def roi = getSelectedROI()
double downsample = Math.max(roi.getBoundsWidth(), roi.getBoundsHeight()) / maxDim;
def request = RegionRequest.createInstance(server.getPath(), downsample, roi)
def img = server.readRegion(request)
var server = getCurrentServer()
var roi = getSelectedROI()
double downsample = Math.max(roi.getBoundsWidth(), roi.getBoundsHeight()) / maxDim
var request = RegionRequest.createInstance(server.getPath(), downsample, roi)
var img = server.readRegion(request)

// Show all the predictions
def artist = artifact.properties['artist']
var artist = artifact.properties["artist"]
println("$artist is painting...")
try (def model = DjlZoo.loadModel(artifact, true)) {
try (def predictor = model.newPredictor()) {
try (var model = DjlZoo.loadModel(artifact, true)) {
try (var predictor = model.newPredictor()) {
// Show as an overlay
def output = DjlZoo.imageToImage(predictor, img)
def viewer = getCurrentViewer()
def overlay = new BufferedImageOverlay(viewer.getOverlayOptions(), request, output)
var output = DjlZoo.imageToImage(predictor, img)
var viewer = getCurrentViewer()
var overlay = new BufferedImageOverlay(viewer.getOverlayOptions(), request, output)
Platform.runLater {viewer.getCustomOverlayLayers().setAll(overlay)}
}
}
Binary file added docs/deep/images/wsinfer.png
Binary file added docs/deep/images/wsinfer_options.png
1 change: 1 addition & 0 deletions docs/deep/index.md
@@ -6,4 +6,5 @@
djl
bioimage
stardist
wsinfer
```
