PathologyGo

Core components of PathologyGo, the AI assistance system designed for histopathological inference.

Dependencies

  • Docker
  • Python 2.7 and 3.x
  • openslide
  • tensorflow_serving
  • grpc
  • pillow
  • numpy
  • opencv-python

Dockerized TensorFlow Serving

Quick Start

This code is easy to use. Just change the paths to point at your data repository:

from utils import config

# Restrict inference to the configured GPUs. CUDA_VISIBLE_DEVICES must be set
# before importing Inference so that TensorFlow only sees the selected devices.
GPU_LIST = config.INFERENCE_GPUS
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ','.join('{0}'.format(n) for n in GPU_LIST)

from inference import Inference


if __name__ == '__main__':
    pg = Inference(data_dir='/path/to/data/', data_list='/path/to/list',
                   class_num=2, result_dir='./result', use_level=1)
    pg.run()

You may configure all the model-specific parameters in utils/config.py.
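For orientation, a minimal utils/config.py could look like the sketch below. Only INFERENCE_GPUS is referenced by the snippet above; the other names (MODEL_NAME, SERVING_HOST, SERVING_PORT, PATCH_SIZE) are illustrative placeholders and may differ from the actual keys in this repository.

# utils/config.py -- illustrative sketch; names other than INFERENCE_GPUS
# are placeholders, not necessarily the repository's real settings.

# GPUs used for inference (exported via CUDA_VISIBLE_DEVICES).
INFERENCE_GPUS = [0, 1]

# Name under which the exported model is registered with TensorFlow Serving.
MODEL_NAME = 'pathology_model'

# Address of the dockerized TensorFlow Serving gRPC endpoint.
SERVING_HOST = 'localhost'
SERVING_PORT = 8500

# Patch size fed to the model, in pixels.
PATCH_SIZE = 256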

Example

Taking the CAMELYON16 test dataset as an example, the data path would be /data/CAMELYON/, and the data list would contain

001.tif
002.tif
...

The predicted heatmaps will be written to ./result.
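With that layout, the Quick Start snippet only needs its paths updated. A sketch (the list file name test_list.txt is a placeholder):

from utils import config
GPU_LIST = config.INFERENCE_GPUS
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ','.join('{0}'.format(n) for n in GPU_LIST)
from inference import Inference

if __name__ == '__main__':
    # Tumor-vs-normal prediction on CAMELYON16 uses two classes.
    pg = Inference(data_dir='/data/CAMELYON/',
                   data_list='/data/CAMELYON/test_list.txt',  # placeholder file name
                   class_num=2, result_dir='./result', use_level=1)
    pg.run()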

DIY Notes

You may use other exported models: change the model name used by TensorFlow Serving in utils/config.py, and remember to update class_num and use_level accordingly.

Note that the input and output tensor names of the exported model are expected to be input and output, respectively.
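To illustrate where those tensor names appear, here is a minimal sketch of a TensorFlow Serving gRPC request; the endpoint, model name, and dummy patch are placeholder values, not the repository's actual client code.

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Placeholder endpoint and model name; match them to your utils/config.py.
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

patch = np.zeros((1, 256, 256, 3), dtype=np.float32)  # dummy input patch

request = predict_pb2.PredictRequest()
request.model_spec.name = 'pathology_model'                    # served model name
request.inputs['input'].CopyFrom(tf.make_tensor_proto(patch))  # 'input' tensor

response = stub.Predict(request, 10.0)                         # 10-second timeout
heatmap = tf.make_ndarray(response.outputs['output'])          # 'output' tensor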
