Tested with Python 3.8 and Chrome
And the following additional Python packages:
- torchvision
- Flask_Restless
You can likely install the Python requirements using something like (note the Python 3+ requirement):
pip3 install -r requirements.txt
The library versions have been pinned to the currently validated ones. Later versions are likely to work, but may not allow for cross-site/version reproducibility.
We received feedback that some users had difficulty installing Torch, so we provide a detailed guide here. The general steps for installing PyTorch can be summarized as follows:
- Check your NVIDIA GPU Compute Capability @ https://developer.nvidia.com/cuda-gpus
- Download CUDA Toolkit @ https://developer.nvidia.com/cuda-downloads
- Install PyTorch command can be found @ https://pytorch.org/get-started/locally/
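After following those steps, a quick sanity check can confirm whether PyTorch is importable and whether it can see your GPU. This is only an illustrative sketch, not part of the repository:

```python
def torch_status():
    """Report whether PyTorch is importable and whether CUDA is visible."""
    try:
        import torch  # only available after the install steps above
    except ImportError:
        return "PyTorch is not installed"
    device = "CUDA available" if torch.cuda.is_available() else "CPU only"
    return f"PyTorch {torch.__version__}, {device}"

print(torch_status())
```

If the last line reports "CPU only" on a machine with an NVIDIA GPU, the installed PyTorch build likely does not match your CUDA Toolkit version.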
By default, the server will start up on localhost:5555.
Warning: virtualenv will not work with paths that contain spaces, so make sure the entire path to env/ is free of spaces.
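Since that restriction is easy to trip over, here is a sketch of a pre-flight check you could run before creating env/ (the path below is only an example; substitute your own):

```shell
# Hypothetical pre-flight check: refuse paths containing spaces,
# since virtualenv cannot handle them.
ENV_PATH="/opt/PatchSorter/env"   # example; substitute your intended env/ location
case "$ENV_PATH" in
  *" "*) echo "unsafe: '$ENV_PATH' contains spaces" ;;
  *)     echo "safe: $ENV_PATH" ;;
esac
```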
There are many modular functions in PatchSorter whose behavior can be adjusted through hyperparameters. These hyperparameters can be set in the config.ini file.
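As a sketch of how such settings are read, Python's standard configparser handles config.ini-style files. The section and key names below are hypothetical examples, not the actual contents of the repository's config.ini:

```python
import configparser

# Hypothetical config.ini fragment; the real file ships with the repository.
SAMPLE = """
[flask]
port = 5555
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)
print(config.getint("flask", "port"))
```

Editing a value in config.ini and restarting the server is enough for the change to take effect.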
- Image name, e.g., train_1.png
- Mask image name, e.g., train_1_mask.png
- CSV file name, e.g., train_1.csv
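Given that convention, the expected companion file names can be derived from the image name. The helper below is purely illustrative, not part of PatchSorter:

```python
from pathlib import Path

def companion_files(image_name):
    """Derive the expected mask and CSV names from an image name,
    following the suffix convention above."""
    stem = Path(image_name).stem
    return f"{stem}_mask.png", f"{stem}.csv"

print(companion_files("train_1.png"))  # → ('train_1_mask.png', 'train_1.csv')
```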
Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files.
In order to use the Docker version of PatchSorter, you need:
- An NVIDIA driver supporting CUDA (see the NVIDIA documentation).
- Docker Engine (see the Docker documentation).
- nvidia-docker: https://github.com/NVIDIA/nvidia-docker
Depending on your CUDA version, we provide Dockerfiles for cuda_10 and cuda_11.
To build the image, run either:
docker build -t patchsorter -f cuda_10/Dockerfile .
docker build -t patchsorter -f cuda_11/Dockerfile .
from the PatchSorter folder.
When the docker image is done building, it can be run by typing:
docker run --gpus all -v /data/$CaseID/PatchSorter:/opt/PatchSorter -v /data/$CaseID/<location_of_images>/:/opt/imagedata -p 5555:5555 --shm-size=8G patchsorter
In the above command,
-v /data/$CaseID/PatchSorter:/opt/PatchSorter mounts the PatchSorter directory on the host file system to the PatchSorter directory inside the container:
/data/$CaseID/PatchSorter should be the PatchSorter path on your host file system, and
/opt/PatchSorter is the PatchSorter path inside the container, as specified in the Dockerfile.
If image files will be uploaded using the upload-folder option, the image directory needs to be mounted as well:
/data/$CaseID/<location_of_images>/ would be the path for images on your host file system,
/opt/imagedata will be the path for the images inside the container.
Note: This command forwards port 5555 of the host to port 5555 of the container, where the Flask server is running as specified in config.ini. The host-side port number should match the port configured in config.ini.
Please use the citation below if you find this repository useful or if you use the software shared here in your research.
Talawalla T., Toth R., Walker C., Horlings H., Rea K., Rottenberg S., Madabhushi A., Janowczyk A., "PatchSorter: a high throughput open-source digital pathology tool for histologic object labeling", European Society of Digital and Integrative Pathology (ESDIP), Germany, 2022