
Getting error while invoking sagemaker endpoint #245

Closed
Harathi123 opened this issue May 8, 2018 · 14 comments

@Harathi123

I created a training job in SageMaker with my own training and inference code, using the MXNet framework. I was able to train the model successfully and created an endpoint as well. But when invoking the endpoint for inference, I get the following error:
‘ClientError: An error occurred (413) when calling the InvokeEndpoint operation: HTTP content length exceeded 5246976 bytes.’
What I understood from my research is that the error is due to the size of the image. The image shape is (480, 512, 3), and I trained the model with images of the same shape.

When I resized the image to (240, 256), that error went away, but it produced another error, 'shape inconsistent in convolution', because I trained the model with images of size (480, 512).

I don't understand why I am getting this error during inference.
Can't we use larger images when invoking the model?
Any suggestions would be helpful.

Thanks, Harathi

@Harathi123 changed the title from "Getting error while infering sagemaker endpoint" to "Getting error while invoking sagemaker endpoint" on May 8, 2018

djarpin commented May 8, 2018

Thanks @Harathi123 . Payloads for SageMaker invoke endpoint requests are limited to about 5MB. So if you're storing the pixel values as 8-byte floats, then 480 * 512 * 3 * 8 = 5,898,240 bytes, which is larger than this 5MB payload limit.

One option for doing inference on larger images might be to pass an S3 path in your invoke endpoint request and then write your scoring logic to download the image stored at that S3 path before doing inference.

There may be other ways to get around this, like compressing the image before sending and then decompressing within the container before inference, but these may be very use case specific.
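
For example, a minimal sketch of the compression route (assuming OpenCV and NumPy are available on both the client and in the container; the JSON key name is just illustrative):

import base64
import json

import cv2
import numpy as np

# Client side: PNG-compress the raw pixels before sending.
img = cv2.imread('image.png')                       # (480, 512, 3) uint8
ok, png_buf = cv2.imencode('.png', img)             # lossless compression
payload = json.dumps({'image_png_b64': base64.b64encode(png_buf.tobytes()).decode('ascii')})

# Container side (e.g. inside transform_fn): reverse the steps before inference.
request = json.loads(payload)
raw = np.frombuffer(base64.b64decode(request['image_png_b64']), dtype=np.uint8)
img_decoded = cv2.imdecode(raw, cv2.IMREAD_COLOR)   # back to (480, 512, 3) uint8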


Harathi123 commented May 8, 2018

Hi @djarpin, thanks for the suggestions.
This is my transform function:

import json
from mxnet import nd

def transform_fn(net, data, input_content_type, output_content_type):
    image = json.loads(data)    # request body arrives as JSON text
    nda = nd.array(image)       # convert to an MXNet NDArray
    prediction = net(nda)
    # decode() is my own post-processing helper defined elsewhere
    response_body = json.dumps(decode(prediction.asnumpy()))
    return response_body, output_content_type

This is how I am invoking the endpoint. I am passing a NumPy array of the image.

import cv2

# predictor is the deployed endpoint's predictor object
img = cv2.imread('image.png')        # (480, 512, 3) uint8
img = img.reshape((1, 3, 480, 512))
img = img.astype('float32') / 255
pred = predictor.predict(img)

Can I pass an S3 path in the invoke endpoint request like this?

    pred = predictor.predict(' .....S3 path......')

Thanks,
Harathi

@andremoeller

Hi @Harathi123 ,

You could possibly pass in a dictionary, like

{ 's3_path' : 's3://my-bucket/my-key' }

And then, in your transform function, retrieve the value of s3_path, download that file from S3, and predict on it.
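
A rough sketch of what that could look like inside the container, assuming boto3 is available there and the model is MXNet-based (the bucket/key parsing and the output handling are just placeholders):

import json

import boto3
import mxnet as mx
from mxnet import nd


def transform_fn(net, data, input_content_type, output_content_type):
    # Expect a tiny JSON payload like {"s3_path": "s3://my-bucket/my-key"}
    request = json.loads(data)
    bucket, _, key = request['s3_path'][len('s3://'):].partition('/')

    # Pull the image from S3 instead of receiving it in the request body.
    body = boto3.client('s3').get_object(Bucket=bucket, Key=key)['Body'].read()
    image = mx.image.imdecode(body)                     # HWC uint8 NDArray
    batch = nd.transpose(image.astype('float32') / 255,
                         axes=(2, 0, 1)).expand_dims(axis=0)  # NCHW, batch of 1

    prediction = net(batch)
    response_body = json.dumps(prediction.asnumpy().tolist())  # or your decode() helper
    return response_body, output_content_type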

But it seems to me like the image you're invoking with should be small enough since you're using float32 dtype now. Could you tell us what the value of img.nbytes is before predicting with img, and if InvokeEndpoint still says your payload is too large, could you post the stacktrace?
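
For reference, one quick way to check on the client; note that if the predictor JSON-serializes the array (which I believe is the default for the MXNet predictor in the Python SDK), the wire payload is the JSON text, which is several times larger than img.nbytes because every float becomes a decimal string:

import json

# img is the float32 array from the snippet above
print(img.nbytes)                      # 1 * 3 * 480 * 512 * 4 = 2,949,120 bytes (~2.8 MiB) raw
print(len(json.dumps(img.tolist())))   # size of the JSON text actually sent; much larger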

Thanks!


austinmw commented Jan 18, 2019

Hi @djarpin, I could really use your help if possible. Is this 5MB a hard limit that is unaffected by how I change nginx.conf client_max_body_size? What is the limit exactly and where can I find more information about this? Is there any way to increase the limit? It seems very low and is causing a lot of pain and frustration in integrating the endpoint into a production pipeline. My team is currently evaluating these endpoints and this issue is a big one for us.


djarpin commented Jan 19, 2019

Hi @austinmw ,
Yes, the 5MB limit is a hard limit imposed by the SageMaker platform, as documented in the SageMaker service limits.

Typically, exceeding the 5MB limit is caused by:

  1. Sending too many small records in a single request to a live endpoint, in which case batch transform could be used instead.
  2. Having very large single records (e.g. videos or high-resolution images), in which case storing the file in S3, sending the S3 path, and having the container pick up the S3 object based on the path is a common workaround.

Thanks.


austinmw commented Jan 19, 2019

@djarpin Thanks for your reply. I have a lot of high-res images to process, and pulling them from S3 seems very inefficient, especially if they aren't originally coming from S3 and I have to both upload and download each one. How do people typically handle large images in SageMaker?

@austinmw

@djarpin Hi, also, after testing I believe the max payload size is 5 MiB, not 5 MB.

@dorg-jmiller

If you're using an nginx server as part of your custom Docker image, you may need to change the value of client_max_body_size within your nginx.conf file.

I set client_max_body_size to 0, which allows for an unlimited body size.
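
For reference, the relevant directive looks something like this in the container's nginx.conf (server port and upstream name are illustrative; this only lifts nginx's own default of 1 MB, not any limit the SageMaker platform enforces in front of the container):

server {
    listen 8080;

    # 0 disables nginx's request-body size check (the nginx default is 1m)
    client_max_body_size 0;

    location / {
        proxy_pass http://gunicorn;   # assumes an upstream named "gunicorn" is defined elsewhere
    }
}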


austinmw commented Jan 23, 2019

@dorg-jmiller I tried that, but was still running into the 5 MiB limit. Have you been able to send a large payload (e.g. 10 MB) by modifying client_max_body_size? AWS phone support told me that 5 MiB was a hard limit regardless, but maybe they were wrong.

Modifying my SavedModel to accept JSON-serialized, base64-encoded strings significantly reduced the size of the tensors I'm sending, so this 5 MiB limit is not as big of an issue now (although still a bit of a pain). Without doing so I hit the limit with tensors larger than (5, 128, 128, 3); now I can send up to about (2500, 128, 128, 3).

@dorg-jmiller

Ah sorry, I missed above that you had already modified this limit in nginx.conf. I'm working with text and not images, so I was only running into the size limit when SageMaker would send data in 6 MB batches (the default).

Sorry again if I'm missing what was discussed above, but is the MaxPayloadInMB parameter when creating a batch transform job not what you want?
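
For anyone landing here, that parameter is exposed as MaxPayloadInMB in the CreateTransformJob API and as max_payload in the SageMaker Python SDK; a rough sketch with the SDK (model name, S3 paths, and instance type are placeholders):

from sagemaker.transformer import Transformer

transformer = Transformer(
    model_name='my-model',                  # placeholder: an existing SageMaker model
    instance_count=1,
    instance_type='ml.m5.xlarge',
    strategy='SingleRecord',
    max_payload=100,                        # per-record payload cap for the job, in MB
    output_path='s3://my-bucket/output/',   # placeholder
)
transformer.transform(data='s3://my-bucket/input/', content_type='application/x-image')
transformer.wait()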


austinmw commented Jan 23, 2019

@dorg-jmiller I think the 5 MiB limit mentioned doesn't affect batch transform jobs, only live HTTP endpoints. I should probably experiment with ways to take advantage of batch transform jobs more often, but currently I need real-time inference from deployed endpoints.

Going from the JSON-serialized list of NumPy arrays to JSON-serialized base64-encoded strings helped a lot. Now I'd like to try switching from RESTful TF Serving to gRPC so I don't need to JSON-serialize at all. Hopefully it's not too big of a pain to figure out.

@dorg-jmiller

Gotcha, that makes sense. From the little bit I know, batch transform won't suffice when you need real time inference.


tf401 commented Jun 5, 2019

@austinmw

I've run into the same problem as you: a NumPy array of shape (3, 218, 525, 3) hits the limit with my current serialization.

I'm really keen to know in more detail how you serialized your data. My best try so far is the following (frames is a NumPy array with the shape above):

import json
import base64

# frames is a NumPy array of shape (3, 218, 525, 3)
b = base64.b64encode(frames.tobytes()).decode('utf-8')  # raw array bytes -> base64 text
r = json.dumps([str(frames.dtype), b, frames.shape])    # keep dtype/shape so it can be rebuilt

but it's nowhere near your results.

Thanks!

@SaschaHeyer

A more up-to-date answer:
Use AWS SageMaker Async Inference
https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html

Amazon SageMaker Asynchronous Inference is a new capability in SageMaker that queues incoming requests and processes them asynchronously. This option is ideal for requests with large payload sizes (up to 1GB), long processing times (up to 15 minutes), and near real-time latency requirements. Asynchronous Inference enables you to save on costs by autoscaling the instance count to zero when there are no requests to process, so you only pay when your endpoint is processing requests.
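
A minimal sketch of the invocation side, assuming the endpoint was already created with an async inference config (endpoint name and S3 paths are placeholders); the request payload itself lives in S3, which is what sidesteps the real-time size limit:

import boto3

runtime = boto3.client('sagemaker-runtime')

# Upload the input to S3 first; the request only carries its location.
response = runtime.invoke_endpoint_async(
    EndpointName='my-async-endpoint',                    # placeholder
    InputLocation='s3://my-bucket/requests/image.json',  # placeholder
    ContentType='application/json',
)
print(response['OutputLocation'])  # predictions are written back to S3 asynchronously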
