From 7fa2c2e5cfbbb2aaff4dc2623173ccb99ccc3ad2 Mon Sep 17 00:00:00 2001
From: Kanto <30429223+yonghyeokrhee@users.noreply.github.com>
Date: Sun, 30 Aug 2020 00:26:36 +0900
Subject: [PATCH] Update README.md

pytorch default_inference_handler address changed from github

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 4eb142d..db34fef 100644
--- a/README.md
+++ b/README.md
@@ -40,7 +40,7 @@ RUN pip3 install multi-model-server sagemaker-inference
 To use the SageMaker Inference Toolkit, you need to do the following:
 
 1. Implement an inference handler, which is responsible for loading the model and providing input, predict, and output functions.
-   ([Here is an example](https://github.com/aws/sagemaker-pytorch-serving-container/blob/master/src/sagemaker_pytorch_serving_container/default_inference_handler.py) of an inference handler.)
+   ([Here is an example](https://github.com/aws/sagemaker-pytorch-serving-container/blob/master/src/sagemaker_pytorch_serving_container/default_pytorch_inference_handler.py) of an inference handler.)
 
 ``` python
 from sagemaker_inference import content_types, decoder, default_inference_handler, encoder, errors