Corrected typos (#72)
* Corrected typos

* Corrected link
Rylern committed May 31, 2023
1 parent c8fd9ef commit 637c39e
Showing 7 changed files with 10 additions and 10 deletions.
4 changes: 2 additions & 2 deletions docs/deep/bioimage.md
@@ -24,7 +24,7 @@ It also provides test inputs and outputs, so it's possible to check the results
QuPath aims to support the zoo via the [QuPath Bioimage Model Zoo extension](https://github.com/qupath/qupath-extension-bioimageio).

The overall aim is to enable models kept in the Zoo to be imported into some QuPath-friendly form.
-Currently, the zoo contains a lot of models devoted to image segementation - so the extension focusses on converting these models to QuPath pixel classifiers.
+Currently, the zoo contains a lot of models devoted to image segmentation - so the extension focusses on converting these models to QuPath pixel classifiers.


:::{admonition} Adding Deep Java Library
@@ -126,4 +126,4 @@ QuPath currently aims to support:
* TensorFlow saved model bundles (you'll need to unzip the bundle)... assuming you're not using Apple silicon
* PyTorch *using Torchscript only*
* ONNX *might* work via QuPath's built-in OpenCV (if you're very lucky), or if you [build QuPath from source](building) adding the OnnxRuntime engine to DJL
-:::
+:::
4 changes: 2 additions & 2 deletions docs/deep/djl.md
@@ -47,7 +47,7 @@ If the download is successful, the indicator beside the engine should switch to

:::{admonition} Why an extension?
The *QuPath Deep Java Library extension* is at an early stage and under active development.
-Keeping it as a separate extension allows us to make updates ithout needing to make an entirely new QuPath release.
+Keeping it as a separate extension allows us to make updates without needing to make an entirely new QuPath release.

In the future, it might well become included in QuPath by default.

@@ -337,4 +337,4 @@ This only begins to scratch the surface of possibilities for deep learning suppo
Because Groovy gives access to all of QuPath and all of DJL, a lot more can already be done by scripting - including loading, and even training, your own models.
Check out the DJL documentation for more details.

-Over time, the QuPath extension and docs will be updated as we make deep learning easier to use without needing to grapple with DJL directly.
+Over time, the QuPath extension and docs will be updated as we make deep learning easier to use without needing to grapple with DJL directly.
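
The context lines above mention that Groovy scripts can reach all of DJL directly. As a minimal sketch of that idea, assuming a hypothetical local TorchScript file, raw `NDList` inputs and outputs, and a PyTorch engine that has already been downloaded, loading a model from a script might look like this:

```groovy
// Minimal sketch: load a (hypothetical) local TorchScript model via DJL from a Groovy script.
// Assumes the PyTorch engine is already available to DJL.
import ai.djl.ndarray.NDList
import ai.djl.repository.zoo.Criteria
import java.nio.file.Paths

def criteria = Criteria.builder()
        .setTypes(NDList, NDList)                      // raw tensor lists in and out
        .optModelPath(Paths.get('/path/to/model.pt'))  // hypothetical TorchScript file
        .optEngine('PyTorch')
        .build()

def model = criteria.loadModel()
def predictor = model.newPredictor()
// ... build an NDList from your input tensors and call predictor.predict(...) ...
predictor.close()
model.close()
```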
4 changes: 2 additions & 2 deletions docs/reference/faqs.md
@@ -124,7 +124,7 @@ There is some info about adding GPU support for specific cases in {ref}`building
However, note that many bottlenecks depend upon things that cannot be solved by the GPU alone (e.g. reading image tiles, the user interface thread).
Therefore the real-world impact on performance may be quite modest for many applications.

-The interactive machine learning uses OpenCV as the processing library, which uses the CPU (but highly-optimzed).
+The interactive machine learning uses OpenCV as the processing library, which uses the CPU (but highly-optimized).
It is designed so that other machine learning libraries could potentially be used, if suitable extensions are written.

### Why do I see a warning when I try to install QuPath?
@@ -187,7 +187,7 @@ See {ref}`Open URI` for more details.

### Why does my image open but look weird?

-See [Why can't QuPath open my image?]
+See {ref}`Why can't QuPath open my image?`

### Is it possible to view slide labels?

2 changes: 1 addition & 1 deletion docs/scripting/overview.md
@@ -25,7 +25,7 @@ You can find QuPath's API docs at <http://qupath.github.io/javadoc/docs/>

## Default imports

-In the *Script Editor*, there is an option {menuselection}`Run --> Include default bindings`.
+In the *Script Editor*, there is an option {menuselection}`Run --> Include default imports`.

If this is selected, QuPath will add the following line to the top of your script:

2 changes: 1 addition & 1 deletion docs/starting/first_steps.md
@@ -273,7 +273,7 @@ For more information on the strange use of the word *descendant*, see {doc}`../c
Next, try creating detection objects inside an annotation.
First, draw an annotation in an area of the image containing cells - ideally quite small, to contain perhaps 100 cells.

-Run the {menuselection}`Analyze --> Cell analysis --> Cell detection` command.
+Run the {menuselection}`Analyze --> Cell detection --> Cell detection` command.
This should bring up an intimidating list of parameters to adapt the detection to different images.
If you like you can explore these, and hover the mouse over each parameter for a description - but for now, you can also just ignore them and use the defaults (which tend to behave sensibly across a range of images).

2 changes: 1 addition & 1 deletion docs/tutorials/cell_detection.md
@@ -65,7 +65,7 @@ Ki67 image with annotation

### Run *Positive cell detection*

-Run the {menuselection}`Analyze --> Cell analysis --> Positive cell detection` command.
+Run the {menuselection}`Analyze --> Cell detection --> Positive cell detection` command.
This will bring up a dialog, where most of the options relate to how the cells are detected.
The default values are often good enough to get started.

2 changes: 1 addition & 1 deletion docs/tutorials/separating_stains.md
@@ -53,7 +53,7 @@ Brightness/Contrast tool and channel viewer with a multiplexed image.
:::

:::{tip}
-{menuselection}`View --> Mini viewers --> Channel viewer` can be used to visualize all separated channels simultaneously.
+{menuselection}`View --> Show channel viewer` can be used to visualize all separated channels simultaneously.
:::

:::{tip}