Testing adlik performance
Closes #79
Signed-off-by: zhangkaili <zhang.kaili@zte.com.cn>
KellyZhang2020 committed Jun 11, 2020
1 parent 0b31b33 commit 8574df7
Showing 46 changed files with 2,041 additions and 0 deletions.
1 change: 1 addition & 0 deletions azure-pipelines.yml
Expand Up @@ -21,3 +21,4 @@ jobs:
- template: ci/azure-pipelines/jobs/markdownlint.yml
- template: ci/azure-pipelines/jobs/tox-model-compiler.yml
- template: ci/azure-pipelines/jobs/tox-model-compiler-2.yml
- template: ci/azure-pipelines/jobs/tox-benchmark.yml
2 changes: 2 additions & 0 deletions benchmark/.flake8
@@ -0,0 +1,2 @@
[flake8]
max-line-length = 120
21 changes: 21 additions & 0 deletions benchmark/.pylintrc
@@ -0,0 +1,21 @@
[MASTER]
jobs=0

[MESSAGES CONTROL]
disable = fixme,
          no-else-return,
          too-many-arguments,
          too-few-public-methods,
          too-many-locals,
          too-many-instance-attributes,
          no-member,
          unnecessary-pass

[FORMAT]
max-line-length = 120

[BASIC]
good-names = i,
             j,
             k,
             o
66 changes: 66 additions & 0 deletions benchmark/README.md
@@ -0,0 +1,66 @@
# About the benchmark

The benchmark is used to test the Adlik serving performance of different models.

## Test the runtime performance

The parameters of the automatic test framework are as follows:

|Abbreviation| Argument | Type | Description |Default|
|----------- |--------------------|------ |---------------------------------------------- |-------|
|-d |--docker-file-path | str | The Dockerfile path for the serving type under test | |
|-s |--serving-type | str | The serving type under test | |
|-b |--build-directory | str | The directory in which to build the Docker image | |
|-a |--adlik-directory | str | The Adlik directory |Adlik |
|-m |--model-name | str | The name of the model used for the test | |
|-c |--client-script | str | The script used to run inference |client_script.sh|
|-ss |--serving-script | str | The serving script |serving_script.sh|
|-ov |--openvino-version | str | The OpenVINO version |2019.3.344|
|-tt |--tensorrt-tar | str | The TensorRT tar package |TensorRT-7.0.0.11.Ubuntu-18.04.x86_64-gnu.cuda-10.0.cudnn7.6.tar.gz|
|-tv |--tensorrt-version | str | The TensorRT version |7.0.0.11|
|-l |--log-path | str | The path of the log directory |log |
|-tm |--test-model-path | str | The path of the test model | |
|-sj |--serving-json | str | The serving model JSON file |serving_model.json|
|-cis |--client-inference-script|str|The client inference script | |
|-i |--image-filename | str | The input image filename | |
|-gl |--gpu-label | int | The GPU label | None |
|-cs |--compile-script | str | The script that compiles the model |compile_script.sh|
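
Of these, only `-d`, `-s`, `-b`, `-m`, `-tm`, `-cis` and `-i` are required (they are declared with `required=True` in
automatic_test.py); every other flag falls back to the default listed above. Below is a minimal sketch that reuses the
mnist example files from this repository; whether the default scripts resolve correctly depends on your directory
layout, so the full command further below is the safer reference:

```sh
python3 benchmark/src/automatic_test.py \
    -d benchmark/test/docker_test/openvino.Dockerfile \
    -s openvino \
    -b . \
    -m mnist \
    -tm benchmark/test/test_model/mnist_keras \
    -cis mnist_client.py \
    -i mnist.png
```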

If you want to use automatic_test.py to test the runtime, you need to follow the steps below:

1. Download the Adlik code.
2. Install Docker.
3. Prepare the serving_model.json (required for compiling the model) and the trained model. The model file format can
   be .pb, .h5, .ckpt, .onnx, or SavedModel, and it is recommended to put the model and serving_model.json under the
   Adlik/benchmark/test/test_model directory.
4. When writing serving_model.json, refer to the serving_model.json of each model in
   Adlik/benchmark/test/test_model and to Adlik/model_compiler/src/model_compiler/config_schema.json.
5. If the required inference code is not in the Adlik/benchmark/test/client directory of the benchmark, you need to
   write the inference code yourself.
6. Specify the type of the runtime under test and its version number (if needed, e.g. for OpenVINO and TensorRT).
7. Explicitly specify whether a GPU is required (see the GPU example after the command below).
8. Make sure the environment running the code has Python 3.7 or above installed.
9. According to the runtime type under test, select the Dockerfile, serving script, and compile script required by
   the test under the Adlik/benchmark/test directory.
10. Configure the parameters for the test; for example, run the following command in the Adlik directory:

```sh
python3 benchmark/src/automatic_test.py -d benchmark/test/docker_test/openvino.Dockerfile -s openvino -b . -a . \
    -m mnist -c benchmark/test/client_script/client_script.sh \
    -ss benchmark/test/serving_script/openvino_serving_script.sh -l abspath(log) \
    -tm benchmark/test/test_model/mnist_keras -cis mnist_client.py -i mnist.png \
    -cs benchmark/test/compile_script/compile_script.sh
```
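
If the runtime under test needs a GPU, adding the `-gl` flag makes the framework run the container through
nvidia-docker with the NV_GPU environment variable set to the given label. Below is a sketch for a TensorRT test,
assuming GPU 0; the TensorRT Dockerfile and serving-script names are placeholders, so substitute the corresponding
files under benchmark/test for the runtime you are actually testing:

```sh
python3 benchmark/src/automatic_test.py -d benchmark/test/docker_test/tensorrt.Dockerfile -s tensorrt -b . -a . \
    -m mnist -c benchmark/test/client_script/client_script.sh \
    -ss benchmark/test/serving_script/tensorrt_serving_script.sh -l "$(pwd)/log" \
    -tm benchmark/test/test_model/mnist_keras -cis mnist_client.py -i mnist.png \
    -cs benchmark/test/compile_script/compile_script.sh -gl 0
```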

## NOTE

1. If you test the TensorRT runtime, you need to register and download the dependency package from
   [TensorRT](https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html). It is recommended to place
   the downloaded package in the Adlik directory.
2. If your local environment cannot connect to the external network, you need to configure usable apt and pip
   sources and add the configuration commands to the Dockerfile.
3. During the Bazel build, if some packages cannot be pulled, you can download the required packages in advance and
   point Bazel at them with the --distdir option.
4. To prevent the Bazel build from occupying too many cores and stalling the machine, you can use --jobs to limit the
   number of concurrent jobs (see the sketch after this list).
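
The notes above translate into commands along these lines; this is only a sketch, and the mirror host, distdir path,
job count, and Bazel target are placeholders rather than values taken from this repository:

```sh
# Note 2: point pip and apt at reachable mirrors inside the Dockerfile (as RUN steps);
# mirror.example.com is a placeholder.
pip config set global.index-url https://mirror.example.com/pypi/simple
sed -i 's|archive.ubuntu.com|mirror.example.com|g' /etc/apt/sources.list

# Notes 3 and 4: build with pre-downloaded packages and a bounded number of concurrent jobs.
bazel build //some:target --distdir=/path/to/downloaded/packages --jobs=4
```
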
4 changes: 4 additions & 0 deletions benchmark/bandit.yaml
@@ -0,0 +1,4 @@
include:
- '*.py'

skips: [B404,B603,B101,B110]
48 changes: 48 additions & 0 deletions benchmark/setup.py
@@ -0,0 +1,48 @@
#!/usr/bin/env python3

# Copyright 2019 ZTE corporation. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

"""
Benchmark test.
"""

from setuptools import find_packages, setup

_VERSION = '0.0.0'

_REQUIRED_PACKAGES = [
    'keras==2.2.4',
    'onnx==1.5.0',
    'protobuf==3.6.1',
    'torch==1.3.0',
    'torchvision==0.4.0',
    'requests',
    'tensorflow==1.14.0',
    'jsonschema==3.1.1',
    'networkx==2.3',
    'defusedxml==0.5.0'
]

_TEST_REQUIRES = [
    'bandit==1.6.0',
    'flake8==3.7.7',
    'pylint==2.3.1',
    'pytest-cov',
    'pytest-xdist'
]

setup(
    name="benchmark",
    version=_VERSION.replace('-', ''),
    author='ZTE',
    author_email='ai@zte.com.cn',
    packages=find_packages('src'),
    package_dir={'': 'src'},
    description=__doc__,
    license='Apache 2.0',
    keywords='Test serving-lite performance',
    install_requires=_REQUIRED_PACKAGES,
    extras_require={'test': _TEST_REQUIRES}
)
100 changes: 100 additions & 0 deletions benchmark/src/automatic_test.py
@@ -0,0 +1,100 @@
# Copyright 2019 ZTE corporation. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

"""
Automated test runtime performance.
"""

import subprocess
import argparse
import os


def _parse_arguments():
    args_parser = argparse.ArgumentParser()
    args_parser.add_argument("-d", "--docker-file-path", type=str, required=True,
                             help="The Dockerfile path for the serving type under test")
    args_parser.add_argument("-s", "--serving-type", type=str, required=True, help="The serving type under test",
                             choices=("openvino", "tensorrt", "tensorflow", "tensorflow_gpu"))
    args_parser.add_argument("-b", "--build-directory", type=str, required=True,
                             help="The directory in which to build the Docker image")
    args_parser.add_argument("-a", "--adlik-directory", type=str, default="Adlik", help="The Adlik directory")
    args_parser.add_argument("-m", "--model-name", type=str, required=True,
                             help="The name of the model used for the test")
    args_parser.add_argument("-c", "--client-script", type=str, default="client_script.sh",
                             help="The script used to run inference")
    args_parser.add_argument("-ss", "--serving-script", type=str, default="serving_script.sh",
                             help="The serving script")
    args_parser.add_argument("-ov", "--openvino-version", type=str, default="2019.3.344",
                             help="The OpenVINO version")
    args_parser.add_argument("-tt", "--tensorrt-tar", type=str,
                             default="TensorRT-7.0.0.11.Ubuntu-18.04.x86_64-gnu.cuda-10.0.cudnn7.6.tar.gz",
                             help="The TensorRT tar package")
    args_parser.add_argument("-tv", "--tensorrt-version", type=str, default="7.0.0.11", help="The TensorRT version")
    args_parser.add_argument("-l", "--log-path", type=str, default="log", help="The path of the log directory")
    args_parser.add_argument('-tm', '--test-model-path', type=str, required=True, help="The path of the test model")
    args_parser.add_argument("-sj", "--serving-json", type=str, default="serving_model.json",
                             help="The serving model JSON file")
    args_parser.add_argument("-cis", "--client-inference-script", type=str, required=True,
                             help="The client inference script")
    args_parser.add_argument("-i", "--image-filename", type=str, required=True, nargs="?",
                             help="The input image filename")
    args_parser.add_argument("-gl", "--gpu-label", type=int, default=None, help="The GPU label")
    args_parser.add_argument("-cs", "--compile-script", type=str, default="compile_script.sh",
                             help="The script that compiles the model")
    return args_parser.parse_args()


def _get_result(log_path, model_name):
    calculate_command = ['python3', os.path.join(os.path.dirname(__file__), 'test_result.py'),
                         '-c', os.path.join(log_path, 'client_time.log'),
                         '-s', os.path.join(log_path, 'serving_time.log'),
                         '-m', model_name]
    # Capture the script output so the result can be printed; without stdout=PIPE the
    # stdout attribute would be None.
    with subprocess.Popen(calculate_command, stdout=subprocess.PIPE, universal_newlines=True) as result_process:
        print(result_process.stdout.read())


def _docker_build_command(args):
    build_arg = ['--build-arg', f'SERVING_SCRIPT={args.serving_script}',
                 '--build-arg', f'CLIENT_SCRIPT={args.client_script}',
                 '--build-arg', f'TEST_MODEL_PATH={args.test_model_path}',
                 '--build-arg', f'SERVING_JSON={args.serving_json}',
                 '--build-arg', f'CLIENT_INFERENCE_SCRIPT={args.client_inference_script}',
                 '--build-arg', f'IMAGE_FILENAME={args.image_filename}',
                 '--build-arg', f'COMPILE_SCRIPT={args.compile_script}']

    # Runtime-specific build arguments; the remaining serving types need no extras.
    if args.serving_type == 'openvino':
        build_arg.extend(['--build-arg', f'OPENVINO_VERSION={args.openvino_version}'])
    elif args.serving_type == 'tensorrt':
        build_arg.extend(['--build-arg', f'TENSORRT_VERSION={args.tensorrt_version}',
                          '--build-arg', f'TENSORRT_TAR={args.tensorrt_tar}'])

    build_command = ['docker', 'build', '--build-arg', f'ADLIK_DIRECTORY={args.adlik_directory}']
    build_command.extend(build_arg)
    build_command.extend(['-f', f'{args.docker_file_path}'])
    build_command.extend(['-t', f'adlik-test:{args.serving_type}', f'{args.build_directory}'])
    return build_command


def main(args):
    """
    Automated test runtime performance.
    """

    docker_build_command = _docker_build_command(args)

    run_env = os.environ.copy()

    if not args.gpu_label:
        docker_run_command = ['docker', 'run', '--rm',
                              '-v', f'{args.log_path}:/home/john/log',
                              f'adlik-test:{args.serving_type}']
    else:
        # Select the GPU through the NV_GPU environment variable; subprocess does not go
        # through a shell, so the variable cannot be prepended to the command itself.
        run_env['NV_GPU'] = str(args.gpu_label)
        docker_run_command = ['nvidia-docker', 'run', '--rm',
                              '-v', f'{args.log_path}:/home/john/log',
                              f'adlik-test:{args.serving_type}']

    subprocess.run(docker_build_command, check=True)
    subprocess.run(docker_run_command, check=True, env=run_env)
    _get_result(args.log_path, args.model_name)


if __name__ == '__main__':
    main(_parse_arguments())
31 changes: 31 additions & 0 deletions benchmark/src/cmd_script.py
@@ -0,0 +1,31 @@
# Copyright 2019 ZTE corporation. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

"""
The CMD script
"""

import subprocess
import argparse


def _main(args):
    compile_command = ['sh', '-c', args.compile_script]
    serving_command = ['sh', '-c', args.serving_script]
    client_command = ['sh', '-c', args.client_script]
    # Compile the model first, then start serving and run the client against it;
    # the serving process is killed once the client finishes or fails.
    subprocess.run(compile_command, check=True)
    with subprocess.Popen(serving_command) as process:
        try:
            subprocess.run(client_command, check=True)
        finally:
            process.kill()


if __name__ == '__main__':
    ARGS_PARSER = argparse.ArgumentParser()
    ARGS_PARSER.add_argument('-s', '--serving-script', type=str, required=True,
                             help='The serving script')
    ARGS_PARSER.add_argument('-c', '--client-script', type=str, required=True,
                             help='The client script')
    ARGS_PARSER.add_argument('-cs', '--compile-script', type=str, required=True,
                             help='The compile script')
    PARSE_ARGS = ARGS_PARSER.parse_args()
    _main(PARSE_ARGS)
45 changes: 45 additions & 0 deletions benchmark/src/compile_model.py
@@ -0,0 +1,45 @@
# Copyright 2019 ZTE corporation. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

"""
Compile the model.
"""

import os
import json
import argparse
import model_compiler # pylint:disable=import-error


def _get_request(request_file, test_model_dir):
    request = json.load(request_file)
    model_dir = request["input_model"]
    request["input_model"] = os.path.join(test_model_dir, model_dir)
    export_dir = request["export_path"]
    request["export_path"] = os.path.join(test_model_dir, export_dir)
    return request


def compile_model(args):
    """
    Compile the model.
    """

    request_dir = os.path.join(args.test_model_path, args.serving_model_json)
    try:
        with open(request_dir, 'r') as request_file:
            request = _get_request(request_file, args.test_model_path)
        result = model_compiler.compile_model(request)
        print(result)
    except FileNotFoundError:
        # Report the missing serving_model.json path if it cannot be opened.
        print(f"Can not compile the model: {request_dir} does not exist")


if __name__ == '__main__':
    ARGS_PARSER = argparse.ArgumentParser()
    ARGS_PARSER.add_argument('-t', '--test-model-path', type=str, required=True, help='The path of the test model')
    ARGS_PARSER.add_argument('-s', '--serving-model-json', type=str, default='serving_model.json',
                             help='The serving model JSON file')
    PARSE_ARGS = ARGS_PARSER.parse_args()
    compile_model(PARSE_ARGS)
