Merge pull request #85 from qupath/0.4
Sync to v0.4
petebankhead committed Nov 8, 2023
2 parents 2067eea + 2723d8c commit dc5f7b4
Showing 4 changed files with 52 additions and 5 deletions.
6 changes: 5 additions & 1 deletion .readthedocs.yaml
@@ -5,12 +5,16 @@
# Required
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.10"

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: conf.py

# Optionally set the version of Python and requirements required to build your docs
python:
  install:
    - requirements: requirements.txt
17 changes: 16 additions & 1 deletion docs/deep/djl.md
@@ -43,7 +43,22 @@
Instead, DJL can download them when they are needed and store them locally on your computer.
This *can* happen automatically, but QuPath tells DJL not to do that since downloading large files unexpectedly could be troublesome for some users.
Instead, you should use the {menuselection}`Manage DJL Engines` command to explicitly request the download.

(deep-java-library-gpu)=
:::{admonition} GPU support
:class: tip

To use an NVIDIA GPU with either TensorFlow or Pytorch, you will need to have a *compatible* version of CUDA installed *before* downloading the engine.

What counts as 'compatible' here depends upon the versions of the deep learning libraries involved.

QuPath v0.4.4 uses Deep Java Library 0.20.0, which by default uses
* [PyTorch 1.13.0](https://docs.djl.ai/engines/pytorch/pytorch-engine/index.html#supported-pytorch-versions), which requires [CUDA 11.6 or 11.7](https://pytorch.org/get-started/previous-versions/#v1130)
* [TensorFlow 2.7.4](https://github.com/deepjavalibrary/djl/releases/tag/v0.20.0), which requires [CUDA 11.2](https://www.tensorflow.org/install/source#gpu).

The fact that PyTorch and TensorFlow require different CUDA versions is... not helpful, so you may only be able to get GPU support for one of them.
:::
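If you're unsure whether CUDA is visible at all, DJL can report what it finds.
Here is a minimal Groovy sketch for QuPath's script editor, assuming the Deep Java Library extension is installed so that DJL's `CudaUtils` class is available:

```groovy
// Minimal sketch: ask DJL how many CUDA-capable GPUs it can detect.
// Assumes the Deep Java Library extension is installed, so DJL's
// classes are on QuPath's classpath.
import ai.djl.util.cuda.CudaUtils

int gpus = CudaUtils.getGpuCount()
println "CUDA-capable GPUs detected: " + gpus
if (gpus > 0) {
    // Report the CUDA version that DJL detects
    println "CUDA version: " + CudaUtils.getCudaVersionString()
}
```

If this prints 0 (or throws an error), your CUDA installation isn't visible to DJL, and downloading the engines is unlikely to give you GPU support.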

If downloading the engine is successful, the indicator beside the engine should switch to green.

:::{admonition} Why an extension?
The *QuPath Deep Java Library extension* is at an early stage and under active development.
33 changes: 30 additions & 3 deletions docs/deep/wsinfer.md
@@ -98,16 +98,43 @@
The options available will depend upon your computer's capabilities (at least as …)

* **CPU**: This is generally the safest - and slowest - option, because it should be supported on all computers.
* **MPS**: This stands for *Metal Performance Shaders*, and should be available on recent Apple Silicon - it is the Mac version of GPU acceleration.
* **GPU**: This should appear if you have an NVIDIA GPU, CUDA... and a little bit of luck.

If either MPS or GPU works for you, it should reduce the time required for inference by a *lot*.
However, GPU configuration can be tricky, as it will depend upon the other hardware and software on your computer.

For more info, see [the Deep Java Library page](deep-java-library-gpu).

:::{admonition} PyTorch & CUDA versions
:class: tip

The WSInfer extension uses Deep Java Library to manage its PyTorch installation.
It won't automatically find any existing PyTorch you might have installed; Deep Java Library will download its own.

If you have a compatible GPU and want CUDA support, you'll need to ensure an appropriate CUDA version is installed *before* PyTorch is downloaded.
QuPath v0.4.x uses PyTorch 1.13.x by default, which is expected to work with CUDA 11.6 or 11.7.
:::
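To confirm which PyTorch build DJL actually resolved, you can query the engine from a script.
A hedged sketch using DJL's general `Engine` API (nothing WSInfer-specific); it assumes the PyTorch engine has already been downloaded:

```groovy
// Sketch: report the PyTorch engine and device DJL has resolved.
// Assumes the PyTorch engine was already downloaded, e.g. using
// the Manage DJL Engines command.
import ai.djl.engine.Engine

def engine = Engine.getEngine("PyTorch")
println "Engine: ${engine.engineName} ${engine.version}"
// A GPU device here indicates that CUDA was picked up successfully
println "Default device: ${engine.defaultDevice()}"
```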


## Scripting

The QuPath WSInfer extension is scriptable, which makes it much easier to apply across multiple images.

When a model is run, the command parameters are stored in the [workflow](workflows) so that a [script can be generated automatically](workflows-to-scripts).

An example script would be:

```groovy
selectAnnotations()
qupath.ext.wsinfer.WSInfer.runInference("kaczmarj/pancancer-lymphocytes-inceptionv4.tcga")
```

where the `selectAnnotations()` line was added when I pressed the {guilabel}`Annotations` button in the WSInfer dialog, and the following line runs the specified model (creating tiles automatically).

To process in batch, I would need to

* Add my images to a QuPath project
* Annotate the regions of interest in the images (and save the data)
* Open the above script in QuPath's script editor
* Choose {menuselection}`Run --> Run for project`, and select the images I want to process
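Since some images in a project may not contain annotations, a slightly more defensive version of the script can skip those.
This is a hedged sketch building on the example above - only `WSInfer.runInference(...)` comes from the extension; the rest is standard QuPath scripting, and the guard logic is my own addition:

```groovy
// Sketch: batch-friendly variant that skips images without annotations.
import qupath.ext.wsinfer.WSInfer

def annotations = getAnnotationObjects()
if (annotations.isEmpty()) {
    println "No annotations found - skipping this image"
    return
}
selectObjects(annotations)
WSInfer.runInference("kaczmarj/pancancer-lymphocytes-inceptionv4.tcga")
```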

1 change: 1 addition & 0 deletions docs/scripting/workflows_to_scripts.md
@@ -1,3 +1,4 @@
(workflows-to-scripts)=
# Workflows to scripts

Being able to log commands in the form of a {doc}`workflow based upon the Command history <workflows>` is a reasonable first step towards achieving reproducibility in analysis, since at least it records what has been done to an image.
