Modify transform function to support batch inference #108

Merged (12 commits) on Sep 7, 2022

Conversation

nikhil-sk
Contributor

Issue #, if available:
This PR fixes the issue described in the PyTorch (PT) inference toolkit repo. The fix can be applied at the transform function in the SageMaker inference toolkit, which the PT toolkit inherits from.

Description of changes:

  1. This PR fixes the issue where the transform() function drops all but one request when running prediction on a batch.
  2. This PR adds a transform() function that overrides transform() from sagemaker-inference-toolkit. It loops through the batched input data, runs _transform_fn() on each input, and appends each response to a list; once all inputs are processed, the list of responses is returned.
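The looping behavior described above can be sketched as follows. This is a minimal illustration only: `transform_fn` stands in for the toolkit's internal `_transform_fn`, and the real method additionally deals with content types, accept headers, and per-request error handling.

```python
def transform(input_batch, transform_fn):
    """Run transform_fn on each request in a TorchServe batch.

    Sketch of the batching fix: instead of handling a single request,
    iterate over every request TorchServe aggregated into the batch.
    """
    responses = []
    for data in input_batch:
        responses.append(transform_fn(data))
    # TorchServe expects exactly one response per batched request.
    return responses
```

The key point is that the returned list has the same length as the input batch, so TorchServe can route one response back to each of the original callers.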

Config used:

env_variables_dict = {
    "SAGEMAKER_TS_BATCH_SIZE": "3",
    "SAGEMAKER_TS_MAX_BATCH_DELAY": "10000",
    "SAGEMAKER_TS_MIN_WORKERS": "1",
    "SAGEMAKER_TS_MAX_WORKERS": "1",
}
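For reference, these `SAGEMAKER_TS_*` variables configure TorchServe's dynamic batching: the batch size, the maximum delay (in milliseconds) to wait for a full batch, and the worker counts. A hypothetical helper illustrating how such env vars correspond to TorchServe batching settings (the key names on the right are illustrative, not the toolkit's actual internal keys):

```python
def ts_settings_from_env(env):
    """Map SageMaker TorchServe env vars to batching settings.

    Illustrative only: the real toolkit writes these values into
    the TorchServe model configuration at startup.
    """
    mapping = {
        "SAGEMAKER_TS_BATCH_SIZE": "batch_size",
        "SAGEMAKER_TS_MAX_BATCH_DELAY": "max_batch_delay",
        "SAGEMAKER_TS_MIN_WORKERS": "min_workers",
        "SAGEMAKER_TS_MAX_WORKERS": "max_workers",
    }
    # Env var values arrive as strings; TorchServe settings are integers.
    return {mapping[k]: int(v) for k, v in env.items() if k in mapping}
```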

Requests were sent to the SageMaker endpoint as follows:

import multiprocessing


def invoke(endpoint_name):
    # `predictor` is the deployed SageMaker Predictor; the endpoint_name
    # argument is unused and only gives pool.map something to iterate over.
    return predictor.predict(
        "{Bloomberg has decided to publish a new report on global economic situation.}"
    )


endpoint_name = predictor.endpoint_name
pool = multiprocessing.Pool(3)
results = pool.map(invoke, 5 * [endpoint_name])
pool.close()
pool.join()
print(results)

Logs. With batch size 3 and 5 concurrent requests, TorchServe flushes a first batch of 3 requests immediately and the remaining 2 only after the 10-second max batch delay (note the QueueTime.ms:10000 entries in the second flush):

lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,606 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1658385260606
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,608 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Backend received inference at: 1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,608 [WARN ] W-9000-model_1.0-stderr MODEL_LOG - Downloading: 100%|██████████| 28.0/28.0 [00:00<00:00, 40.9kB/s]
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,609 [WARN ] W-9000-model_1.0-stderr MODEL_LOG - Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,829 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT1
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT2
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Got input Data: {Bloomberg has decided to publish a new report on global economic situation.}
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 223
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PRED SequenceClassifierOutput(loss=None, logits=tensor([[ 0.1999, -0.2964]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PREDICTION ['Not Accepted']
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT1
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,831 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT2
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0 ACCESS_LOG - /172.18.0.1:41768 "POST /invocations HTTP/1.1" 200 235
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,831 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Got input Data: {Bloomberg has decided to publish a new report on global economic situation.}
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,831 [INFO ] W-9000-model_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385250
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,831 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PRED SequenceClassifierOutput(loss=None, logits=tensor([[ 0.1999, -0.2964]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,831 [INFO ] W-9000-model_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PREDICTION ['Not Accepted']
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT1
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0 ACCESS_LOG - /172.18.0.1:41766 "POST /invocations HTTP/1.1" 200 237
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT2
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385250
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Got input Data: {Bloomberg has decided to publish a new report on global economic situation.}
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PRED SequenceClassifierOutput(loss=None, logits=tensor([[ 0.1999, -0.2964]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,833 [INFO ] W-9000-model_1.0 ACCESS_LOG - /172.18.0.1:41772 "POST /invocations HTTP/1.1" 200 238
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,833 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PREDICTION ['Not Accepted']
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,833 [INFO ] W-9000-model_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385250
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,833 [INFO ] W-9000-model_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:220.84|#ModelName:model,Level:Model|#hostname:4eaca41fef85,requestID:48456f5d-451c-4b5b-a377-b70c0a630510,b2f48fcb-5e16-47d2-a592-a08b03794a1e,0d9d4a84-bd30-409e-b6ab-abe5d975efe9,timestamp:1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,833 [INFO ] W-9000-model_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,834 [INFO ] W-9000-model_1.0 TS_METRICS - WorkerThreadTime.ms:5|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,879 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1658385270879
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,880 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Backend received inference at: 1658385270
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,981 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT1
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,981 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT2
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,981 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 101
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Got input Data: {Bloomberg has decided to publish a new report on global economic situation.}
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0 ACCESS_LOG - /172.18.0.1:41768 "POST /invocations HTTP/1.1" 200 10104
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PRED SequenceClassifierOutput(loss=None, logits=tensor([[ 0.1999, -0.2964]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385250
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PREDICTION ['Not Accepted']
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT1
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0 TS_METRICS - QueueTime.ms:10000|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385270
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT2
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0 ACCESS_LOG - /172.18.0.1:41766 "POST /invocations HTTP/1.1" 200 10105
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Got input Data: {Bloomberg has decided to publish a new report on global economic situation.}
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385250
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PRED SequenceClassifierOutput(loss=None, logits=tensor([[ 0.1999, -0.2964]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0 TS_METRICS - QueueTime.ms:10000|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385270
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PREDICTION ['Not Accepted']
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0 TS_METRICS - WorkerThreadTime.ms:3|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385270
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,984 [INFO ] W-9000-model_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:100.58|#ModelName:model,Level:Model|#hostname:4eaca41fef85,requestID:6a42623e-e66c-4681-a508-a297b519ee39,bbea0728-5968-43e7-abd4-cdbe9cc61455,timestamp:1658385270
[b'["Not Accepted"]', b'["Not Accepted"]', b'["Not Accepted"]', b'["Not Accepted"]', b'["Not Accepted"]']

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

Merge Checklist

Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your pull request.

General

  • I have read the CONTRIBUTING doc
  • I used the commit message format described in CONTRIBUTING
  • I have used the regional endpoint when creating S3 and/or STS clients (if appropriate)
  • I have updated any necessary documentation, including READMEs

Tests

  • I have added tests that prove my fix is effective or that my feature works (if appropriate)
  • I have checked that my tests are not configured for a specific region or account (if appropriate)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@nikhil-sk nikhil-sk changed the title Use an overriden transform function to support batch inference Modify transform function to support batch inference Jul 21, 2022
@sagemaker-bot
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 2389f3c
  • Result: FAILED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

@sagemaker-bot
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 0796a7b
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)


@sagemaker-bot
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: c9193ee
  • Result: FAILED
  • Build Logs (available for 30 days)


waytrue17 previously approved these changes Aug 5, 2022
Contributor

@waytrue17 left a comment

Shall we have a test to cover input batch > 1?

waytrue17 previously approved these changes Sep 6, 2022
Contributor

@waytrue17 left a comment

LGTM

@sagemaker-bot
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 3b35a9e
  • Result: FAILED
  • Build Logs (available for 30 days)


@sagemaker-bot
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 5d2d145
  • Result: FAILED
  • Build Logs (available for 30 days)


@sagemaker-bot
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 47f8106
  • Result: FAILED
  • Build Logs (available for 30 days)


@sagemaker-bot
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 67f57ef
  • Result: FAILED
  • Build Logs (available for 30 days)


@sagemaker-bot
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 57b4e23
  • Result: FAILED
  • Build Logs (available for 30 days)


@sagemaker-bot
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: fde4f84
  • Result: FAILED
  • Build Logs (available for 30 days)


@sagemaker-bot
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: ef4adf3
  • Result: FAILED
  • Build Logs (available for 30 days)


@sagemaker-bot
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 1c8dac6
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)
