Merge pull request #75 from alanocallaghan/main
Build instructions, book link, typo, fix links + formatting
petebankhead committed Sep 12, 2023
2 parents 0a4f4c2 + bb484a4 commit 3ad5649
Showing 8 changed files with 65 additions and 54 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -4,3 +4,4 @@
!.vscode/settings.json

_build/**
venv
12 changes: 11 additions & 1 deletion README.md
@@ -2,6 +2,16 @@

This contains the source for QuPath's documentation, hosted at https://qupath.readthedocs.io

## Building locally

To build this locally, you should first install (possibly in a [venv](https://docs.python.org/3/library/venv.html)):
- `sphinx-build`
- `sphinx_rtd_theme`
- `myst_parser`
- `readthedocs-sphinx-search`

You will also need the command line tool `make` (e.g., [GNU Make](https://www.gnu.org/software/make/)).
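
A minimal sketch of the full sequence might look like the following (the exact pip package names and the `html` Makefile target are assumptions based on a typical Sphinx setup, not taken from this repository):

```bash
# Optional: create and activate a virtual environment
python -m venv venv
source venv/bin/activate

# Install the Sphinx dependencies listed above
# (the 'sphinx' package provides the sphinx-build command)
pip install sphinx sphinx_rtd_theme myst_parser readthedocs-sphinx-search

# Build the HTML documentation with Make
make html
```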

## License

All original content here is shared under a Creative Commons license ([CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/)).
@@ -18,4 +28,4 @@ In some places, the docs include public images from other sources, e.g. within s
For download links and information about their licenses, see [the Acknowledgements page](https://qupath.readthedocs.io/en/stable/docs/intro/acknowledgements.html).

> All this refers only to the documentation on this repo.
> For license info about the QuPath *software*, see https://github.com/qupath/qupath
> For license info about the QuPath *software*, see https://github.com/qupath/qupath
8 changes: 4 additions & 4 deletions docs/concepts/images.md
@@ -4,7 +4,7 @@ QuPath is software for **image analysis**.
This section gives a brief overview of digital images, and the techniques and concepts needed to analyze them using QuPath.

:::{tip}
For a more extensive introduction to images and bioimage analysis concepts, see [Analyzing fluorescence microscopy images with ImageJ].
For a more extensive introduction to images and bioimage analysis concepts, see [Introduction to Bioimage Analysis].

```{image} images/analyzing_book.png
:align: center
@@ -318,8 +318,8 @@ Objects (e.g. annotations, cells) also should remember which plane they belong t
For more sophisticated multidimensional image analysis you might want to turn to other software, such as [Fiji].

[a pixel is not a little square]: http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
[analyzing fluorescence microscopy images with imagej]: https://petebankhead.gitbooks.io/imagej-intro/content/
[blur and the psf]: https://petebankhead.gitbooks.io/imagej-intro/content/chapters/formation_spatial/formation_spatial.html
[analyzing fluorescence microscopy images with imagej]: https://bioimagebook.github.io/
[blur and the psf]: https://bioimagebook.github.io/chapters/3-fluorescence/2-formation_spatial/formation_spatial.html
[fiji]: http://fiji.sc
[ruifrok and johnston]: https://www.ncbi.nlm.nih.gov/pubmed/11531144
[types and bit-depths]: https://petebankhead.gitbooks.io/imagej-intro/content/chapters/bit_depths/bit_depths.html
[types and bit-depths]: https://bioimagebook.github.io/chapters/1-concepts/3-bit_depths/bit_depths.html
90 changes: 45 additions & 45 deletions docs/deep/djl.md
@@ -79,22 +79,22 @@ You can alternatively use `import qupath.ext.djl.*`, which imports other classes
For DJL, the information about each model is stored as an `Artifact`.
Here, we access all the artifacts available for object detection, and select the first one.

```groovy
```java
import qupath.ext.djl.*

def artifacts = DjlZoo.listObjectDetectionModels()
def firstArtifact = artifacts[0]
var artifacts = DjlZoo.listObjectDetectionModels()
var firstArtifact = artifacts[0]
println(firstArtifact)
```

We can see a bit more by converting the artifact to JSON:

```groovy
```java
import qupath.ext.djl.*

def artifacts = DjlZoo.listObjectDetectionModels()
def firstArtifact = artifacts[0]
def json = GsonTools.getInstance(true).toJson(firstArtifact)
var artifacts = DjlZoo.listObjectDetectionModels()
var firstArtifact = artifacts[0]
var json = GsonTools.getInstance(true).toJson(firstArtifact)
println(json)
```

@@ -128,22 +128,22 @@ The `GsonTools.getInstance(true)` means that the JSON will use pretty-printing (
The built-in zoo models are generally intended for 'regular' photos, not microscopy or biomedical images.
The following script takes a model intended for object detection and applies it to an image of a particularly attractive guinea pig contemplating his pellets.

```groovy
```java
import qupath.ext.djl.*

// Allow model to be downloaded if it's not already
boolean allowDownsamples = true

// Get an object detection model from the zoo
def artifacts = DjlZoo.listObjectDetectionModels()
def artifact = artifacts[0]
var artifacts = DjlZoo.listObjectDetectionModels()
var artifact = artifacts[0]

// Load the model
def criteria = DjlZoo.loadModel(artifact, allowDownsamples)
var criteria = DjlZoo.loadModel(artifact, allowDownsamples)

// Apply the detection to the current image
def imageData = getCurrentImageData()
def detected = DjlZoo.detect(criteria, imageData)
var imageData = getCurrentImageData()
var detected = DjlZoo.detect(criteria, imageData)
println "Detected objects: ${detected.orElse([])}"
```

@@ -184,19 +184,19 @@ These don't generate bounding boxes, but rather classify each pixel.

The following Groovy script applies a semantic segmentation model, and converts the output to QuPath annotations.

```groovy
```java
import qupath.ext.djl.*

// Get a semantic segmentation model
boolean allowDownloads = true
def artifacts = DjlZoo.listSemanticSegmentationModels()
def artifact = artifacts[0]
var artifacts = DjlZoo.listSemanticSegmentationModels()
var artifact = artifacts[0]
println artifact

// Apply the model
def imageData = getCurrentImageData()
def model = DjlZoo.loadModel(artifact, allowDownloads)
def segmented = DjlZoo.segmentAnnotations(
var imageData = getCurrentImageData()
var model = DjlZoo.loadModel(artifact, allowDownloads)
var segmented = DjlZoo.segmentAnnotations(
model,
imageData)
println(segmented.orElse([]))
@@ -220,32 +220,32 @@ This might be useful for applications such as stain normalization.
However here we'll use the DJL model zoo to instead see our guinea pig depicted in the styles of various artists.
We convert the output into an ImageJ-friendly form.

```groovy
```java
import qupath.ext.djl.*
import ai.djl.Application.CV

// Get all the image generation models with an 'artist' property
// Note that other image generation models may not work (since they expect different inputs)
def artifacts = DjlZoo.listModels(CV.IMAGE_GENERATION)
artifacts = artifacts.findAll(a -> a.properties.getOrDefault('artist', null))
var artifacts = DjlZoo.listModels(CV.IMAGE_GENERATION)
artifacts = artifacts.findAll(a -> a.properties.getOrDefault("artist", null))

// Get an image
// Note: this shouldn't be too big! Define a maximum dimension
double maxDim = 1024
def server = getCurrentServer()
var server = getCurrentServer()
double downsample = Math.max(server.getWidth(), server.getHeight()) / maxDim

def request = RegionRequest.createInstance(server, Math.max(1.0, downsample))
def img = server.readRegion(request)
var request = RegionRequest.createInstance(server, Math.max(1.0, downsample))
var img = server.readRegion(request)

// Show all the predictions
for (def artifact : artifacts) {
def artist = artifact.properties['artist']
for (var artifact : artifacts) {
var artist = artifact.properties["artist"]
println("$artist is painting...")
try (def model = DjlZoo.loadModel(artifact, true)) {
try (def predictor = model.newPredictor()) {
try (var model = DjlZoo.loadModel(artifact, true)) {
try (var predictor = model.newPredictor()) {
// Show using ImageJ
def output = DjlZoo.imageToImage(predictor, img)
var output = DjlZoo.imageToImage(predictor, img)
new ij.ImagePlus(artist, output).show()
}
}
@@ -283,35 +283,35 @@ width: 45%

Alternatively, the output image can be displayed in QuPath as an overlay.
In this case, it is automatically rescaled to cover the full image.
The opacity can be controled using the slider in the toolbar.
The opacity can be controlled using the slider in the toolbar.


```groovy
```java
import qupath.ext.djl.*
import ai.djl.Application.CV
import qupath.lib.gui.viewer.overlays.*

// Get all the image generation models with an 'artist' property
def artifacts = DjlZoo.listModels(CV.IMAGE_GENERATION)
def artifact = artifacts.find(a -> a.properties['artist'] == 'vangogh')
var artifacts = DjlZoo.listModels(CV.IMAGE_GENERATION)
var artifact = artifacts.find(a -> a.properties["artist"] == "vangogh")

// Get an image
double maxDim = 1024
def server = getCurrentServer()
def roi = getSelectedROI()
double downsample = Math.max(roi.getBoundsWidth(), roi.getBoundsHeight()) / maxDim;
def request = RegionRequest.createInstance(server.getPath(), downsample, roi)
def img = server.readRegion(request)
var server = getCurrentServer()
var roi = getSelectedROI()
double downsample = Math.max(roi.getBoundsWidth(), roi.getBoundsHeight()) / maxDim
var request = RegionRequest.createInstance(server.getPath(), downsample, roi)
var img = server.readRegion(request)

// Show all the predictions
def artist = artifact.properties['artist']
var artist = artifact.properties["artist"]
println("$artist is painting...")
try (def model = DjlZoo.loadModel(artifact, true)) {
try (def predictor = model.newPredictor()) {
try (var model = DjlZoo.loadModel(artifact, true)) {
try (var predictor = model.newPredictor()) {
// Show as an overlay
def output = DjlZoo.imageToImage(predictor, img)
def viewer = getCurrentViewer()
def overlay = new BufferedImageOverlay(viewer.getOverlayOptions(), request, output)
var output = DjlZoo.imageToImage(predictor, img)
var viewer = getCurrentViewer()
var overlay = new BufferedImageOverlay(viewer.getOverlayOptions(), request, output)
Platform.runLater {viewer.getCustomOverlayLayers().setAll(overlay)}
}
}
2 changes: 1 addition & 1 deletion docs/reference/building.md
@@ -9,7 +9,7 @@ Most people using QuPath won't need to build QuPath from source!
Just download an existing installer from [qupath.github.io](https://qupath.github.io) and use that instead.
:::

## Command line
## Building from the command line

If you're moderately comfortable working from a command line, there's not much required to build QuPath:

2 changes: 1 addition & 1 deletion docs/reference/faqs.md
@@ -161,7 +161,7 @@ The config file is inside the *Contents/app* directory.

### Can QuPath be run in batch mode from the command line?

Yes! See {ref}`Command line`.
Yes! See {doc}`../advanced/command_line`.

### Is there a way to make projects self-contained, using the relative paths to images?

2 changes: 1 addition & 1 deletion docs/reference/shortcuts.md
@@ -34,7 +34,7 @@ These shortcuts only work whenever the viewer is 'in focus', i.e. it is the last
```

:::{sidebar} Note for Linux users
You may need to replace {kbd}Alt with {kbd}Alt + Super.
You may need to replace {kbd}`Alt` with {kbd}`Alt + Super`.
:::

### The {kbd}`Alt` key
2 changes: 1 addition & 1 deletion docs/tutorials/pixel_classification.md
@@ -107,7 +107,7 @@ Some of the options available to customize the classifier during training are th

The options include:

- **Classifier**: The type of the classifier. *Artifical neural networks* are *Random trees* are generally good choices. *K nearest neighbor* can be appropriate if you will train from point annotations only (it can become *very* slow with large training regions). Press {guilabel}`Edit` to have more options for each.
- **Classifier**: The type of the classifier. *Artificial neural networks* and *Random trees* are generally good choices. *K nearest neighbor* can be appropriate if you will train from point annotations only (it can become *very* slow with large training regions). Press {guilabel}`Edit` to have more options for each.
- **Resolution**: Same as with the thresholder: controls the level of detail for the classification (and, relatedly, processing time and memory use).
- **Features**: Customize what information goes into the classifier (more information below).
- **Output**: All available classifiers can output a single classification per pixel. Some can also provide an estimated (pseudo)probability value for *each* available classification. This isn't a true probability, will be rescaled to the range 0-255, and requires more memory -- but can be useful in some cases to assess the confidence of the predictions.
