This project is no longer under active maintenance. It is read-only, but you can still clone or fork the repo. Check here for further info. Please contact innereye_info@service.microsoft.com if you run into trouble with the "Archived" state of the repo.
InnerEye-Inference is an Azure App Service web app, written in Python, that runs inference on medical imaging models trained with the InnerEye-DeepLearning toolkit.
You can also integrate this with DICOM using the InnerEye-Gateway.
If you are developing or using this tool locally, we highly recommend Ubuntu 20.04 as your operating system, since the Azure App Service base image is Ubuntu. By developing locally on Ubuntu you keep local and cloud behaviour as close as possible. For Windows users this is easily done through the Windows Subsystem for Linux (WSL).
Download a Conda or Miniconda installer for your platform and run it.
Note that in order to create the Conda environment you will need to have build tools installed on your machine. If you are running Windows, they should already be installed with the Conda distribution.
You can install the build tools on Ubuntu (and other Debian-based distributions) by running:

```shell
sudo apt-get install build-essential
```
If you are running a CentOS/RHEL distribution, you can install the build tools by running:

```shell
yum install gcc gcc-c++ kernel-devel make
```
Start the Conda prompt for your platform. In that prompt, navigate to your repository root and run:

```shell
conda env create --file environment.yml
conda activate inference
```
Create a script named `set_environment.sh` to set your environment variables; it can be sourced on Linux. The app will also read this file if the environment variables are not already present (a sketch of this fallback follows the script).
```bash
#!/bin/bash
# Secret of the service principal used to authenticate with AzureML
export CUSTOMCONNSTR_AZUREML_SERVICE_PRINCIPAL_SECRET=
# Shared secret that callers must pass in the API_AUTH_SECRET header
export CUSTOMCONNSTR_API_AUTH_SECRET=
# AzureML compute cluster, workspace, and experiment used for inference runs
export CLUSTER=
export WORKSPACE_NAME=
export EXPERIMENT_NAME=
# Azure resource group and subscription containing the workspace
export RESOURCE_GROUP=
export SUBSCRIPTION_ID=
# Application (client) ID and tenant of the service principal
export APPLICATION_ID=
export TENANT_ID=
# Datastore and folder to which image data is copied for inference
export DATASTORE_NAME=
export IMAGE_DATA_FOLDER=
```
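For reference, a minimal sketch of what such a fallback could look like, assuming it simply parses `export KEY=VALUE` lines; the repository's actual implementation may differ:

```python
import os


def load_environment(path: str = "set_environment.sh") -> None:
    """Populate os.environ from `export KEY=VALUE` lines if the keys are unset."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line.startswith("export "):
                continue
            key, _, value = line[len("export "):].partition("=")
            # Only fill in variables that are not already set in the environment.
            os.environ.setdefault(key, value)
```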
To test the app locally, run:

```shell
source set_environment.sh
flask run
```
The app can be tested locally using `curl`. To check that the server is running, issue this command from a local shell:

```shell
curl -i -H "API_AUTH_SECRET: <val of CUSTOMCONNSTR_API_AUTH_SECRET>" http://localhost:5000/v1/ping
```
This should produce output similar to:

```text
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Server: Werkzeug/1.0.1 Python/3.7.3
Date: Wed, 18 Aug 2021 11:50:20 GMT
```
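The same health check can also be scripted; a minimal sketch in Python using the `requests` package, mirroring the `curl` call above:

```python
import os

import requests

# The server expects the shared secret in the API_AUTH_SECRET header.
response = requests.get(
    "http://localhost:5000/v1/ping",
    headers={"API_AUTH_SECRET": os.environ["CUSTOMCONNSTR_API_AUTH_SECRET"]},
)
response.raise_for_status()  # raises unless the server answered 200 OK
```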
To test DICOM image segmentation of a file, first create `Tests/TestData/HN.zip` containing a zipped copy of the test DICOM files in `Tests/TestData/HN`, for example with the Python sketch below.
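A minimal sketch for building that zip with Python's standard library (any equivalent zip tool works just as well):

```python
import zipfile
from pathlib import Path

src = Path("Tests/TestData/HN")
with zipfile.ZipFile("Tests/TestData/HN.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for dicom_file in src.rglob("*"):
        if dicom_file.is_file():
            # Store paths relative to the HN folder inside the archive.
            zf.write(dicom_file, dicom_file.relative_to(src))
```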
Then, assuming there is a model `PassThroughModel:4`, issue this command:
```shell
curl -i \
  -X POST \
  -H "API_AUTH_SECRET: <val of CUSTOMCONNSTR_API_AUTH_SECRET>" \
  --data-binary @Tests/TestData/HN.zip \
  http://localhost:5000/v1/model/start/PassThroughModel:4
```
This should produce output similar to:

```text
HTTP/1.0 201 CREATED
Content-Type: text/plain
Content-Length: 33
Server: Werkzeug/1.0.1 Python/3.7.3
Date: Wed, 18 Aug 2021 13:00:13 GMT

api_inference_1629291609_fb5dfdf9
```
Here `api_inference_1629291609_fb5dfdf9` is the run ID of the newly submitted inference job.
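The submission can also be scripted; a minimal sketch in Python, assuming the same model and test data as the `curl` call above:

```python
import os

import requests

with open("Tests/TestData/HN.zip", "rb") as f:
    image_data = f.read()

# POST the raw zip bytes, matching curl's --data-binary behaviour.
response = requests.post(
    "http://localhost:5000/v1/model/start/PassThroughModel:4",
    headers={"API_AUTH_SECRET": os.environ["CUSTOMCONNSTR_API_AUTH_SECRET"]},
    data=image_data,
)
response.raise_for_status()
run_id = response.text  # e.g. api_inference_1629291609_fb5dfdf9
```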
To monitor the progress of the previously submitted inference job, issue this command:
```shell
curl -i \
  -H "API_AUTH_SECRET: <val of CUSTOMCONNSTR_API_AUTH_SECRET>" \
  --head \
  http://localhost:5000/v1/model/results/api_inference_1629291609_fb5dfdf9 \
  --next \
  -H "API_AUTH_SECRET: <val of CUSTOMCONNSTR_API_AUTH_SECRET>" \
  --output "HN_rt.zip" \
  http://localhost:5000/v1/model/results/api_inference_1629291609_fb5dfdf9
```
If the run is still in progress, this should produce output similar to:

```text
HTTP/1.0 202 ACCEPTED
Content-Type: text/html; charset=utf-8
Content-Length: 0
Server: Werkzeug/1.0.1 Python/3.7.3
Date: Wed, 18 Aug 2021 13:45:20 GMT

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
```
If the run is complete, this should produce output similar to:

```text
HTTP/1.0 200 OK
Content-Type: application/zip
Content-Length: 131202
Server: Werkzeug/1.0.1 Python/3.7.3
Date: Wed, 18 Aug 2021 14:01:27 GMT

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  128k  100  128k    0     0   150k      0 --:--:-- --:--:-- --:--:--  150k
```
and download the inference result as a zipped DICOM-RT file to `HN_rt.zip`.
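Polling can likewise be scripted; a minimal sketch in Python that retries until the job finishes, based on the 202/200 status codes shown above (the polling interval is an arbitrary choice):

```python
import os
import time

import requests

run_id = "api_inference_1629291609_fb5dfdf9"  # as returned by the start call
url = f"http://localhost:5000/v1/model/results/{run_id}"
headers = {"API_AUTH_SECRET": os.environ["CUSTOMCONNSTR_API_AUTH_SECRET"]}

while True:
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        # Inference finished: the body is the zipped DICOM-RT result.
        with open("HN_rt.zip", "wb") as f:
            f.write(response.content)
        break
    if response.status_code != 202:
        response.raise_for_status()  # unexpected status: surface the error
    time.sleep(30)  # 202 ACCEPTED means the job is still running
```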
- Install the Azure CLI:

  ```shell
  curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
  ```

- Log in:

  ```shell
  az login --use-device-code
  ```

- Deploy:

  ```shell
  az webapp up --sku S1 --name test-python12345 --subscription <your_subscription_name> -g InnerEyeInference --location <your region> --runtime PYTHON:3.7
  ```

- In the Azure portal, go to Monitoring > Log Stream for debugging logs.
If you would like to reproduce the automatic deployment of the service for testing purposes, create a service principal:

```shell
az ad sp create-for-rbac --name "<name>" --role contributor --scope /subscriptions/<subs>/resourceGroups/InnerEyeInference --sdk-auth
```

The previous command will return a JSON object; store its contents in the `secrets.AZURE_CREDENTIALS` variable used by the deployment workflow in `.github/workflows/deploy.yml`.
If you would like to deploy your Azure App Service behind a Web Application Firewall (WAF) then please see this documentation.
During inference the image data zip file is copied to the IMAGE_DATA_FOLDER in the AzureML workspace's DATASTORE_NAME datastore. At the end of inference the copied image data zip file is overwritten with a simple line of text. At present we cannot delete these. If you would like these overwritten files removed from your datastore you can add a policy to delete items from the datastore after a period of time. We recommend 7 days.
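As one way to set up such a policy, a blob lifecycle rule can be created programmatically; a hedged sketch using the `azure-mgmt-storage` and `azure-identity` packages, where the subscription, resource group, storage account, and container/folder prefix are placeholders you must fill in:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Delete blobs under the image data folder 7 days after their last modification.
client.management_policies.create_or_update(
    "<resource-group>",
    "<storage-account>",
    "default",  # the management policy name must be "default"
    {
        "policy": {
            "rules": [
                {
                    "enabled": True,
                    "name": "delete-processed-image-data",
                    "type": "Lifecycle",
                    "definition": {
                        "actions": {
                            "baseBlob": {
                                "delete": {"daysAfterModificationGreaterThan": 7}
                            }
                        },
                        "filters": {
                            "blobTypes": ["blockBlob"],
                            "prefixMatch": ["<container>/<IMAGE_DATA_FOLDER>"],
                        },
                    },
                }
            ]
        }
    },
)
```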
The Azure App Service uses the packages specified in `requirements.txt` to create the Python virtual environment in which the Flask app runs. The `environment.yml` is used for local environments only. Therefore, if you want to change the packages your app service has access to, you must update `requirements.txt`.
You are responsible for the performance of, and any necessary testing or regulatory clearances for, any models generated.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit the Microsoft CLA site.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
The InnerEye-DeepLearning toolkit, InnerEye-Gateway and InnerEye-Inference (collectively the “Research Tools”) are provided AS-IS for use by third parties for the purposes of research, experimental design and testing of machine learning models. The Research Tools are not intended or made available for clinical use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions. The Research Tools are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used as such. All users are responsible for reviewing the output of the developed model to determine whether the model meets the user’s needs and for validating and evaluating the model before any clinical use. Microsoft does not warrant that the Research Tools or any materials provided in connection therewith will be sufficient for any medical purposes or meet the health or medical requirements of any person.