Inference API

The Inference API listens on port 8080 and is accessible only from localhost by default. To change this behavior, see TorchServe Configuration.

The TorchServe server supports the following APIs:

  • API Description - Gets a list of available APIs and options
  • Health check API - Gets the health status of the running server
  • Predictions API - Gets predictions from the served model
  • Explanations API - Gets explanations from the served model
  • KServe Inference API - Gets predictions of the served model from KServe
  • KServe Explanations API - Gets explanations of the served model from KServe

To view a full list of inference APIs, you can use following command:

curl -X OPTIONS http://localhost:8080

The output is in the OpenAPI 3.0.1 JSON format. You can use it to generate client code; see swagger codegen for more details.
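Because the output is standard OpenAPI JSON, it can also be inspected programmatically. Below is a minimal sketch that enumerates endpoints from a trimmed, hypothetical excerpt of the response (the real document is much larger):

```python
import json

# Trimmed, hypothetical excerpt of the OpenAPI 3.0.1 description
# returned by `curl -X OPTIONS http://localhost:8080`.
openapi_doc = json.loads("""
{
  "openapi": "3.0.1",
  "paths": {
    "/ping": {"get": {"description": "Get TorchServe status."}},
    "/predictions/{model_name}": {"post": {"description": "Predictions entrypoint."}}
  }
}
""")

# List every path and the HTTP methods it supports.
for path, methods in openapi_doc["paths"].items():
    print(path, sorted(m.upper() for m in methods))
```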

Health check API

This API follows the InferenceAPIsService.Ping gRPC API. It returns the health status of the TorchServe server.

TorchServe supports a ping API that you can call to check the health status of a running TorchServe server:

curl http://localhost:8080/ping

If the server is running, the response is:

{
  "status": "Healthy"
}
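A startup script can poll this endpoint until the server is ready. The sketch below uses only the Python standard library; the URL, attempt count, and delay are arbitrary assumptions:

```python
import json
import time
import urllib.request

def is_healthy(body: dict) -> bool:
    """Interpret the JSON body returned by GET /ping."""
    return body.get("status") == "Healthy"

def wait_until_healthy(url="http://localhost:8080/ping",
                       attempts=5, delay=2.0) -> bool:
    """Poll the ping endpoint until the server reports Healthy."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if is_healthy(json.load(resp)):
                    return True
        except OSError:
            pass  # server not reachable yet
        time.sleep(delay)
    return False
```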

Predictions API

This API follows the InferenceAPIsService.Predictions gRPC API. It returns the prediction from the model in the ModelServer.

To get predictions from the default version of each loaded model, make a REST call to /predictions/{model_name}:

  • POST /predictions/{model_name}

curl Example

curl -O https://raw.githubusercontent.com/pytorch/serve/master/docs/images/kitten_small.jpg

curl http://localhost:8080/predictions/resnet-18 -T kitten_small.jpg

or:

curl http://localhost:8080/predictions/resnet-18 -F "data=@kitten_small.jpg"

To get predictions from a loaded model that expects multiple inputs:

curl http://localhost:8080/predictions/squeezenet1_1 -F 'data=@docs/images/dogs-before.jpg' -F 'data=@docs/images/kitten_small.jpg'

or:

import requests

# Use a list of tuples so both files can share the field name "data";
# a dict would silently keep only one of the duplicate keys.
res = requests.post(
    "http://localhost:8080/predictions/squeezenet1_1",
    files=[
        ("data", open("docs/images/dogs-before.jpg", "rb")),
        ("data", open("docs/images/kitten_small.jpg", "rb")),
    ],
)

To get predictions from a specific version of each loaded model, make a REST call to /predictions/{model_name}/{version}:

  • POST /predictions/{model_name}/{version}

curl Example

curl -O https://raw.githubusercontent.com/pytorch/serve/master/docs/images/kitten_small.jpg

curl http://localhost:8080/predictions/resnet-18/2.0 -T kitten_small.jpg

or:

curl http://localhost:8080/predictions/resnet-18/2.0 -F "data=@kitten_small.jpg"

The result is JSON telling you that the image is most likely a tabby cat. The highest prediction was:

{
    "class": "n02123045 tabby, tabby cat",
    "probability": 0.42514491081237793
}
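If you are calling the API from Python, the response can be post-processed the same way. Below is a minimal sketch assuming the handler returns a mapping of class labels to probabilities; the `sample` values are hypothetical apart from the tabby entry above:

```python
def top_prediction(preds):
    """Return the (label, probability) pair with the highest probability."""
    label = max(preds, key=preds.get)
    return label, preds[label]

# Hypothetical response shape: image classifiers typically return a
# mapping of class label -> probability. The tabby value matches the
# sample above; the tiger cat entry is made up for illustration.
sample = {
    "n02123045 tabby, tabby cat": 0.42514491081237793,
    "n02123159 tiger cat": 0.2,
}
print(top_prediction(sample))
```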

Explanations API

TorchServe uses Captum to return explanations for the models it serves.

To get explanations from the default version of each loaded model, make a REST call to /explanations/{model_name}:

  • POST /explanations/{model_name}

curl example

curl http://127.0.0.1:8080/explanations/mnist -T examples/image_classifier/mnist/test_data/0.png

The result is a JSON array containing the explanations for the input image (truncated here):

  [
    [
      [
        [
          0.004570948731989492,
          0.006216969640322402,
          0.008197565423679522,
          0.009563574612830427,
          0.008999274832810742,
          0.009673474804303854,
          0.007599905146155397,
          ...
        ]
      ]
    ]
  ]
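The nested array mirrors the input's shape (batch × channels × height × width), so it can be post-processed directly. Below is a minimal sketch using a tiny hypothetical 1×1×2×3 sample in place of the full response:

```python
# Tiny hypothetical stand-in for the explanations response
# (batch x channels x height x width = 1 x 1 x 2 x 3).
explanations = [[[[0.00457, 0.00622, 0.00820],
                  [0.00956, 0.00900, 0.00967]]]]

# Flatten to a single list of attribution scores.
flat = [v for batch in explanations
          for channel in batch
          for row in channel
          for v in row]

strongest = max(flat)
print(f"{len(flat)} attributions, strongest: {strongest}")
```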

KServe Inference API

TorchServe uses the KServe Inference API to return predictions for the models it serves.

To get predictions from the loaded model, make a REST call to /v1/models/{model_name}:predict:

  • POST /v1/models/{model_name}:predict

curl example

 curl -H "Content-Type: application/json" --data @kubernetes/kserve/kf_request_json/v1/mnist.json http://127.0.0.1:8080/v1/models/mnist:predict

The result is JSON giving you the predictions for the input:

{
  "predictions": [
    2
  ]
}
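The request body follows the KServe v1 "instances" convention. Below is a minimal sketch of building such a payload by hand; the base64-encoded "data" field is an assumption about the mnist handler, and the repository ships a ready-made example at the kubernetes/kserve/kf_request_json/v1/mnist.json path shown above:

```python
import base64
import json

def make_v1_request(image_bytes: bytes) -> str:
    """Build a KServe v1 request body with one base64-encoded instance."""
    body = {"instances": [{"data": base64.b64encode(image_bytes).decode("ascii")}]}
    return json.dumps(body)

# Placeholder bytes stand in for a real image file here.
req = make_v1_request(b"\x00\x01\x02")
print(req)
```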

KServe Explanations API

TorchServe uses the KServe API spec to return explanations for the models it serves.

To get explanations from the loaded model, make a REST call to /v1/models/{model_name}:explain:

  • POST /v1/models/{model_name}:explain

curl example

 curl -H "Content-Type: application/json" --data @kubernetes/kserve/kf_request_json/v1/mnist.json http://127.0.0.1:8080/v1/models/mnist:explain

The result is JSON giving you the explanations for the input (truncated here):

{
  "explanations": [
    [
      [
        [
          0.004570948731989492,
          0.006216969640322402,
          0.008197565423679522,
          0.009563574612830427,
          0.008999274832810742,
          0.009673474804303854,
          0.007599905146155397,
          ...
        ]
      ]
    ]
  ]
}