support workflow in benchmark-ab.py #1445
Conversation
AWS CodeBuild CI Report
Powered by github-codebuild-logs, available on the AWS Serverless Application Repository
benchmarks/benchmark-ab.py
Outdated
ab_cmd = f"ab -c {execution_params['concurrency']} -n {execution_params['requests']/10} -k -p {TMP_DIR}/benchmark/input -T " \
         f"{execution_params['content_type']} {execution_params['inference_url']}/{execution_params['inference_model_url']} > {result_file}"

execute(ab_cmd, wait=True)


def run_benchmark():
    if execution_params['url'].endswith('.war'):
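Note that the f-string above interpolates `execution_params['requests']/10`, and in Python 3 `/` always yields a float, which is why the warm-up log later in this thread shows `-n 100.0`. A minimal sketch of building the same command with floor division so `ab -n` receives an integer (the `execution_params`, `TMP_DIR`, and `result_file` values below are stand-ins mirroring the names in the diff, not the script's actual configuration):

```python
# Stand-in values mirroring the names used in benchmark-ab.py.
execution_params = {
    "concurrency": 10,
    "requests": 1000,
    "content_type": "application/jpg",
    "inference_url": "http://0.0.0.0:8080",
    "inference_model_url": "wfpredict/benchmark",
}
TMP_DIR = "/tmp"
result_file = "/tmp/benchmark/result.txt"

# Use floor division so the request count is an int (100, not 100.0).
warmup_requests = execution_params["requests"] // 10

ab_cmd = (
    f"ab -c {execution_params['concurrency']} -n {warmup_requests} "
    f"-k -p {TMP_DIR}/benchmark/input -T {execution_params['content_type']} "
    f"{execution_params['inference_url']}/{execution_params['inference_model_url']} "
    f"> {result_file}"
)
print(ab_cmd)
```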
Extract this into a method is_workflow_run, or something similar, so that if we later decide to change the extension for workflow artifacts we won't have to change all the if-else statements.
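The suggested helper could look like the following sketch. The name `is_workflow_run` comes from the review comment itself; treating it as a function of `execution_params` is an assumption about how the refactor would be wired in:

```python
def is_workflow_run(execution_params):
    """Return True if the configured artifact is a workflow archive.

    Centralizing the extension check means a future change to the
    workflow artifact extension touches only this one function.
    """
    return execution_params["url"].endswith(".war")


params = {"url": "https://torchserve.pytorch.org/war_files/dog_breed_wf.war"}
print(is_workflow_run(params))  # True
```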
Description
Please include a summary of the feature or issue being fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes #1443
Type of change
Please delete options that are not relevant.
Feature/Issue validation/testing
Please describe the tests [UT/IT] that you ran to verify your changes and the relevant result summary. Provide instructions so it can be reproduced.
Please also list any relevant details for your test configuration.
Test A
Test B
UT/IT execution results
Logs
python ./benchmarks/benchmark-ab.py --config_properties ./benchmarks/config.p
Configured execution parameters are:
{'url': 'https://torchserve.pytorch.org/war_files/dog_breed_wf.war', 'gpus': '', 'exec_env': 'local', 'batch_size': 1, 'batch_delay': 100, 'workers': 2, 'concurrency': 10, 'requests': 1000, 'input': './docs/images/kitten_small.jpg', 'content_type': 'application/jpg', 'image': '', 'docker_runtime': '', 'backend_profiling': False, 'config_properties': './benchmarks/config.properties', 'inference_model_url': 'predictions/benchmark', 'report_location': '/tmp', 'inference_url': 'http://0.0.0.0:8080', 'management_url': 'http://0.0.0.0:8081', 'config_properties_name': 'config.properties'}
Preparing local execution...
*Terminating any existing Torchserve instance ...
torchserve --stop
TorchServe has stopped.
*Setting up model store...
*Starting local Torchserve instance...
torchserve --start --model-store /tmp/model_store --workflow-store /tmp/wf_store --ts-config /tmp/benchmark/conf/config.properties > /tmp/benchmark/logs/model_metrics.log
*Testing system health...
{
"status": "Healthy"
}
*Registering model...
{
"status": "Workflow benchmark has been registered and scaled successfully."
}
Executing warm-up ...
ab -c 10 -n 100.0 -k -p /tmp/benchmark/input -T application/jpg http://0.0.0.0:8080/wfpredict/benchmark > /tmp/benchmark/result.txt
Executing inference performance tests ...
ab -c 10 -n 1000 -k -p /tmp/benchmark/input -T application/jpg http://0.0.0.0:8080/wfpredict/benchmark > /tmp/benchmark/result.txt
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Checklist: