QuPath OpenVINO extension

Welcome to the Intel OpenVINO extension for QuPath!

It adds support for optimized deep learning inference in QuPath using Intel OpenVINO for Java.

Benchmark on an Intel(R) Core(TM) i7-6700K (test image: OS-3.ndpi, model: he_heavy_augment, tile size: 1024x1024):

  • TensorFlow 2.4.1 (platform=mkl): 22:31 minutes
  • OpenVINO 2022.1: 15:02 minutes (x1.48 faster)
  • OpenVINO 2022.1 INT8: 9:54 minutes (x2.33 faster)

Building

You can build this extension from source, or download a pre-built package from the releases page. Choose the one that matches your operating system.
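
For example, a release archive can be fetched from the command line. This is only a sketch: the tag and file name below are hypothetical, so substitute the actual asset listed on the releases page.

# Hypothetical example; replace the tag and file name with a real release asset
# from https://github.com/dkurt/qupath-extension-openvino/releases
curl -LO https://github.com/dkurt/qupath-extension-openvino/releases/download/v0.3.0/qupath-extension-openvino-0.3.0-linux.zip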

Extension + dependencies separately

You can build the extension with

gradlew clean build copyDependencies

The output will be under build/libs.

  • clean removes anything old
  • build builds the QuPath extension as a .jar file and adds it to libs
  • copyDependencies copies the extension's required dependencies to the libs folder
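
After the build you can verify the output (a sketch; the exact jar names and versions on your machine will differ):

# List the built extension jar and the copied dependency jars
ls build/libs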

Extension + dependencies together

Alternatively, you can create a single .jar file that contains both the extension and all its dependencies with

gradlew clean shadowjar

Installing

The extension and its dependencies all need to be available to QuPath inside QuPath's extensions folder.

The easiest way to install the jars is to drag them onto QuPath while it is running. You will then be asked whether you want to copy them to the appropriate folder.
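
If you prefer to install them manually, you can copy the jars yourself. This is a sketch only: ~/QuPath/extensions is an assumed location, and the actual extensions folder for your installation is shown in QuPath's preferences.

# Assumed path; check the extensions folder configured in QuPath's preferences
cp build/libs/*.jar ~/QuPath/extensions/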

Usage

OpenVINO IR format

OpenVINO uses its own format for representing deep learning networks, the Intermediate Representation (IR). It is a pair of .xml and .bin files generated from the original model. You can download ready-to-use models from the models directory. Both FP32 and INT8 (quantized) versions of the models are available; INT8 is faster on most CPUs.

Alternatively, you can convert a model locally. To do so, install the openvino-dev Python package and use the Model Optimizer via the mo command.

Example conversion for a StarDist model (we recommend using a Python virtual environment to install the required packages):

python3 -m venv venv3
source venv3/bin/activate
pip install --upgrade pip
pip install openvino-dev tensorflow

mo --input input --data_type FP16 --input_shape "[1,1024,1024,3]" --saved_model_dir=he_heavy_augment

Note that the extension is able to reshape the model to any input size at runtime, so "[1,1024,1024,3]" is just the default input resolution. For dsb2018_heavy_augment the number of channels is 1, so use --input_shape "[1,1024,1024,1]", as shown below.
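
For example, the corresponding conversion command would be (same flags as above, only the input shape and model directory change; this assumes the SavedModel directory is named dsb2018_heavy_augment):

mo --input input --data_type FP16 --input_shape "[1,1024,1024,1]" --saved_model_dir=dsb2018_heavy_augment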

🤔 For questions and feature requests, use the issues page or the forum.