
Significant differences between measurements and predictions on Pixel 4 #83

Closed
165749 opened this issue Oct 20, 2022 · 7 comments

165749 commented Oct 20, 2022

Hi,

I am trying to reproduce the results of nn-Meter by comparing the measurements on Pixel 4 with the results from the pre-trained predictors (i.e., cortexA76cpu_tflite21 and adreno640gpu_tflite21). However, I observed significant differences between the measurements and predictions.

I converted the provided TensorFlow pb models (i.e., pb_models) into .tflite format, and built the binary benchmark_model from TFLite v2.1 source. I benchmarked all the models with the following commands on Pixel 4 (Snapdragon 855 and Adreno 640):

# For CPUs:
/data/local/tmp/benchmark_model --warmup_runs=10 --num_runs=10 --num_threads=1 --graph=${path}

# For GPUs:
/data/local/tmp/benchmark_model --warmup_runs=10 --num_runs=10 --use_gpu=true --graph=${path}

For example, the measurement of resnet18_0 on CPU shows:

$ /data/local/tmp/benchmark_model --warmup_runs=10 --num_runs=10 --num_threads=1 --graph=${path}
STARTING
...
Loaded model /data/local/tmp/output/tflite-pb-tf21/resnet18_0.tflite
resolved reporter
INFO: Initialized TensorFlow Lite runtime.
Initialized session in 0.732ms
[Init Phase] - Memory usage: max resident set size = 3.07422 MB, total malloc-ed size = 14.5485 MB
[Init Phase] - Memory usage: max resident set size = 3.07422 MB, total malloc-ed size = 14.5485 MB
Running benchmark for at least 10 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
count=10 first=150351 curr=119397 min=119349 max=150351 avg=122502 std=9282

Running benchmark for at least 10 iterations and at least 1 seconds but terminate if exceeding 150 seconds.
count=10 first=119529 curr=119341 min=119341 max=119529 avg=119410 std=53

[Overall] - Memory usage: max resident set size = 71.4961 MB, total malloc-ed size = 31.3739 MB
Average inference timings in us: Warmup: 122502, Init: 732, no stats: 119410

but the prediction on cortexA76cpu_tflite21 is:

...
(nn-Meter) Get weight shape of fc13.fc/MatMul from ['fc13.fc/weight'], input shape:[512, 1000].
(nn-Meter) Get input shape of fc13.fc/MatMul from Reshape, input shape:[-1, 512].
(nn-Meter) Input shape of fc13.fc/MatMul op is [[-1, 512]].
(nn-Meter) Output shape of fc13.fc/MatMul op is [[-1, 1000]].
(nn-Meter) Predict latency: 216.19714599005837 ms
resnet18_0,216.19714599005837

with an error of 81% (i.e., 216.19 vs. 119.41).
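
For reference, the error here is the relative difference between the predicted and the measured latency, e.g.:

pred_ms = 216.197   # nn-Meter prediction for resnet18_0 on cortexA76cpu_tflite21
meas_ms = 119.410   # benchmark_model "no stats" average (119410 us)
error = (pred_ms - meas_ms) / meas_ms
print(f"{error:.0%}")   # -> 81%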

Similarly, for resnet50_0 on GPU, the measurement is:

$ /data/local/tmp/benchmark_model --warmup_runs=10 --num_runs=10 --use_gpu=true --graph=${path}
STARTING!
...
Loaded model /data/local/tmp/output/tflite-pb-tf21/resnet50_0.tflite
resolved reporter
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Next operations are not supported by GPU delegate:
MEAN: Operation is not supported.
First 70 operations will run on the GPU, and the remaining 2 on the CPU.
INFO: Initialized OpenCL-based API.
Applied GPU delegate.
Initialized session in 665.544ms
[Init Phase] - Memory usage: max resident set size = 274.34 MB, total malloc-ed size = 1.32245 MB
Running benchmark for at least 10 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
count=10 first=51879 curr=58338 min=43539 max=58484 avg=55702.5 std=4507

Running benchmark for at least 10 iterations and at least 1 seconds but terminate if exceeding 150 seconds.
count=18 first=58433 curr=58263 min=56980 max=59873 avg=58350.9 std=674

[Overall] - Memory usage: max resident set size = 274.34 MB, total malloc-ed size = 1.90115 MB
Average inference timings in us: Warmup: 55702.5, Init: 665544, no stats: 58350.9

and the predictor produces the following:

...
(nn-Meter) Find node fc21.fc/MatMul with its weight op fc21.fc/weight.
(nn-Meter) Get weight shape of fc21.fc/MatMul from ['fc21.fc/weight'], input shape:[2048, 1000].
(nn-Meter) Get input shape of fc21.fc/MatMul from Reshape, input shape:[-1, 2048].
(nn-Meter) Input shape of fc21.fc/MatMul op is [[-1, 2048]].
(nn-Meter) Output shape of fc21.fc/MatMul op is [[-1, 1000]].
(nn-Meter) Predict latency: 91.73126828870865 ms
resnet50_0,91.73126828870865

with an error of 57% (i.e., 91.73 vs. 58.35).

I am wondering whether I set up the same experimental environment as the one for training the predictors. I can provide more information (e.g., the tflite models) if needed and look into the issue further.

Thank you!

@AIxyz
Copy link

AIxyz commented Oct 21, 2022

I guess --warmup_runs and --num_runs are causing the problem. In fact, while debugging the code I found that when training the predictor, nn_meter/builder/backends/tflite/tflite_profiler.py executes a command via adb shell like "taskset 70 <path/to/benchmark_model> --num_threads=1 --num_runs=50 --warmup_runs=10 --graph=<path/to/tflite/model> --enable_op_profiling=true --use_gpu=false"
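
For completeness, a minimal sketch of issuing the same kind of profiling command from the host over adb (the on-device paths here are placeholders, not nn-Meter's actual profiler code):

import subprocess

# Placeholder on-device paths; adjust to wherever benchmark_model and the model were pushed.
benchmark_bin = "/data/local/tmp/benchmark_model"
model_path = "/data/local/tmp/models/resnet18_0.tflite"

cmd = (
    f"taskset 70 {benchmark_bin} --num_threads=1 --num_runs=50 --warmup_runs=10 "
    f"--graph={model_path} --enable_op_profiling=true --use_gpu=false"
)
# Run the benchmark on the device over adb and print its report.
result = subprocess.run(["adb", "shell", cmd], capture_output=True, text=True, check=True)
print(result.stdout)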

By the way, my attempt to run "nn-meter predict --predictor cortexA76cpu_tflite21 --predictor-version 1.0 --tensorflow pb_models/resnet18_0.pb" reported the following error (resnet18_0.pb is from pb_models):

(nn-Meter) Failed to get shape of node conv1.conv/Conv2D.
(nn-Meter) {'inbounds': ['input_im_0', 'conv1.conv/weight/read'], 'attr': {'name': 'conv1.conv/Conv2D', 'type': 'Conv2D', 'output_shape': [], 'attr': {'dilations': [1, 1, 1, 1], 'strides': [1, 2, 2, 1], 'data_format': b'NHWC', 'padding': b'SAME'}}, 'outbounds': ['conv1.bn.batchnorm/BatchNorm/FusedBatchNormV3']}
Traceback (most recent call last):
  File "/home/john/Python/env_py36X/bin/nn-meter", line 8, in <module>
    sys.exit(nn_meter_cli())
  File "/home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter/utils/nn_meter_cli/interface.py", line 266, in nn_meter_cli
    args.func(args)
  File "/home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter/utils/nn_meter_cli/predictor.py", line 56, in apply_latency_predictor_cli
    latency = predictor.predict(model, model_type) # in unit of ms
  File "/home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter/predictor/nn_meter_predictor.py", line 106, in predict
    graph = model_file_to_graph(model, model_type, input_shape, apply_nni=apply_nni)
  File "/home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter/ir_converter/utils.py", line 42, in model_file_to_graph
    converter = FrozenPbConverter(filename)
  File "/home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter/ir_converter/frozenpb_converter/frozenpb_converter.py", line 23, in __init__
    ShapeInference(self.model_graph, dynamic_fetcher)
  File "/home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter/ir_converter/frozenpb_converter/shape_inference.py", line 943, in __init__
    graph, graph[node_name]
TypeError: 'NoneType' object is not iterable

Could you use "python3 -m pip freeze" to show your package versions? I would also like to know the code you used to convert .pb into .tflite.

Thanks & Regards

165749 (Author) commented Oct 21, 2022

Hi @AIxyz. In my observation, using a larger value for --num_runs may help reduce the variance of the measurements, but it does not substantially change the average. Thanks for the reminder - I didn't realize that CPU affinity was set with taskset 70 when collecting training data for the CPU experiments. I repeated my experiments with the same CPU affinity and found that the measured end-to-end latency of resnet18_0 is now 137.14 ms (still a substantial error compared to the prediction of 182.79 ms):

# taskset 70 /data/local/tmp/benchmark_model --warmup_runs=10 --num_runs=50 --num_threads=1 --graph=${path}
...
Loaded model /data/local/tmp/output/tflite-pb-tf21/resnet18_0.tflite
resolved reporter
Initialized session in 0.682ms
[Init Phase] - Memory usage: max resident set size = 3.12109 MB, total malloc-ed size = 14.6232 MB
Running benchmark for at least 10 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
count=10 first=151057 curr=136984 min=136418 max=151057 avg=138180 std=4295

Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.
count=50 first=136997 curr=137191 min=136834 max=138724 avg=137144 std=264

[Overall] - Memory usage: max resident set size = 71.4961 MB, total malloc-ed size = 31.4656 MB
Average inference timings in us: Warmup: 138180, Init: 682, no stats: 137144

For running the predictors, I only installed the source code with pip install . along with tensorflow==2.6.0 (with Python 3.8):

$ nn-meter predict --predictor cortexA76cpu_tflite21 --predictor-version 1.0 --tensorflow resnet18_0.pb
...
(nn-Meter) Input shape of fc13.fc/MatMul op is [[-1, 512]].
(nn-Meter) Output shape of fc13.fc/MatMul op is [[-1, 1000]].
(nn-Meter) Predict latency: 216.19714599005837 ms
(nn-Meter) [RESULT] predict latency for resnet18_0.pb: 216.19714599005837 ms
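
For reference, the same prediction should also be obtainable through the Python API instead of the CLI (a minimal sketch based on the nn-Meter README; the exact model_type string for frozen pb graphs is an assumption here):

from nn_meter import load_latency_predictor

# Load the same predictor as the CLI flags --predictor / --predictor-version
predictor = load_latency_predictor("cortexA76cpu_tflite21", 1.0)

# "pb" is assumed to be the model_type for frozen TF graphs; the result is in ms
latency_ms = predictor.predict("resnet18_0.pb", model_type="pb")
print(latency_ms)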

Regarding the code for converting .pb to .tflite: since the provided .pb models were generated with TF1, I had to use the TF1 API tf.compat.v1.lite.TFLiteConverter.from_frozen_graph:

import tensorflow as tf

# Output node (the final MatMul) for each provided pb model
name_to_output = {
    'alexnet_0': 'fc3.fc/MatMul',
    'densenet_0': 'fc74.fc/MatMul',
    'googlenet_0': 'fc3.fc/MatMul',
    'mnasnet_0': 'fc20.fc/MatMul',
    'mobilenetv1_0': 'fc3.fc/MatMul',
    'mobilenetv2_0': 'fc20.fc/MatMul',
    'mobilenetv3large_0': 'fc19.fc/MatMul',
    'mobilenetv3small_0': 'fc15.fc/MatMul',
    'proxylessnas_0': 'fc24.fc/MatMul',
    'resnet18_0': 'fc13.fc/MatMul',
    'resnet34_0': 'fc21.fc/MatMul',
    'resnet50_0': 'fc21.fc/MatMul',
    'shufflenetv2_0': 'fc19.fc/MatMul',
    'squeezenet_0': 'fc3.fc/MatMul',
    'vgg11_0': 'fc3.fc/MatMul',
    'vgg13_0': 'fc3.fc/MatMul',
    'vgg16_0': 'fc3.fc/MatMul',
    'vgg19_0': 'fc3.fc/MatMul',
}

for name, output_node in name_to_output.items():
    input_path = f"{name}.pb"
    tflite_path = f"{name}.tflite"
    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        input_path, input_arrays=["input_im_0"], output_arrays=[output_node])
    tflite_model = converter.convert()
    with open(tflite_path, "wb") as f:
        f.write(tflite_model)
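
As a quick sanity check on the converted models, the TFLite interpreter can confirm that the expected input/output tensors survived conversion (a minimal sketch; the expected shapes are taken from the predictor logs above):

import tensorflow as tf

# Load the converted model and check that the input/output tensors look right.
interpreter = tf.lite.Interpreter(model_path="resnet18_0.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())    # expect input_im_0 with shape [1, 224, 224, 3]
print(interpreter.get_output_details())   # expect the fc13.fc/MatMul output with 1000 classes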

JiahangXu (Collaborator) commented

Hi,

Thanks for your interest in nn-Meter. Many factors can affect the measured latency, such as CPU affinity, CPU frequency, and how the benchmark_model binary is compiled. You could try profiling with our provided benchmark models, or fix the CPU frequency of the cores being used with commands like:

cd /sys/devices/system/cpu/cpu4/cpufreq
echo "userspace" > scaling_governor
echo 2419200 > scaling_max_freq
echo 2419200 > scaling_min_freq
echo 2419200 > scaling_setspeed
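
A minimal sketch of applying these settings from the host over adb (this assumes a rooted device, since writing to cpufreq normally requires root; the core index and frequency simply mirror the commands above):

import subprocess

cpufreq = "/sys/devices/system/cpu/cpu4/cpufreq"   # the cpufreq node used in the commands above
freq_khz = "2419200"

def adb_su(command):
    # Run a command as root on the device; quoting keeps the redirection inside su.
    subprocess.run(["adb", "shell", f"su -c '{command}'"], check=True)

adb_su(f"echo userspace > {cpufreq}/scaling_governor")
for node in ("scaling_max_freq", "scaling_min_freq", "scaling_setspeed"):
    adb_su(f"echo {freq_khz} > {cpufreq}/{node}")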

I hope this information helps you obtain latencies consistent with our predicted values.

And hi @AIxyz, thanks for joining the discussion! I tried to reproduce the bug you mentioned by running nn-meter predict --predictor cortexA76cpu_tflite21 --predictor-version 1.0 --tensorflow pb_models/resnet18_0.pb but got no error. My output is as follows:

(nn-Meter) checking local kernel predictors at /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/relu.pkl
/home/jiahang/.local/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator DecisionTreeRegressor from version 0.23.1 when using version 1.1.1. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
  warnings.warn(
/home/jiahang/.local/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator RandomForestRegressor from version 0.23.1 when using version 1.1.1. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
  warnings.warn(
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/addrelu.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/bn.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/concat.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/avgpool.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/bnrelu.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/maxpool.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/split.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/hswish.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/conv-bn-relu.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/se.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/channelshuffle.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/global-avgpool.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/fc.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/dwconv-bn-relu.pkl
(nn-Meter) load predictor /data/jiahang/working/nnmeterdata/predictor/cortexA76cpu_tflite21/add.pkl
(nn-Meter) Start latency prediction ...
2022-10-23 11:50:45.048555: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-10-23 11:50:45.048624: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
(nn-Meter) Input shape of fc13.fc/weight op is [].
(nn-Meter) Output shape of fc13.fc/weight op is [[512, 1000]].
(nn-Meter) Input shape of fc13.fc/weight/read op is [].

...

(nn-Meter) Input shape of fc13.fc/MatMul op is [[-1, 512]].
(nn-Meter) Output shape of fc13.fc/MatMul op is [[-1, 1000]].
(nn-Meter) Predict latency: 216.19714599005837 ms
(nn-Meter) [RESULT] predict latency for resnet18_0.pb: 216.19714599005837 ms

Could you please provide the environment information, such as the version of python/nn-meter/tensorflow, to help me debug this? Thanks a lot!

Best regards,
Jiahang

165749 (Author) commented Oct 23, 2022

@JiahangXu Thanks for sharing more information about the experimental setup. Regarding fixing the CPU frequency, I wonder whether the phone first needs to be rooted to get the necessary permissions.
I tried the customized benchmark tools (i.e., benchmark_model_cpu_v2.1 and benchmark_model_gpu_v2.1), and it turned out that the measurements are much closer to the predictions for both CPU and GPU. I understand that the predictors were probably trained on data collected with the customized benchmark tools. However, I don't quite understand why there is such a large gap between the measurements from the customized tool and those from the binary built from TF source (both TFLite v2.1). In particular, the end-to-end CPU latencies measured with the customized tool are always higher (as shown in the table below). Could you provide some insight into the difference? For example, did you modify the source code based on branch v2.1.0? Did you change any important component in TFLite that could affect performance in your customized build? Thank you!

CPU, TFLite v2.1 (all latencies in µs)

| Model | Measure (Source) | Measure (Customized) | Prediction |
| --- | ---: | ---: | ---: |
| alexnet_0 | 53052 | 78515 | 84794 |
| densenet_0 | 249067 | 346175 | 360042 |
| googlenet_0 | 111755 | 168581 | 177484 |
| mnasnet_0 | 35028 | 56792 | 51644 |
| mobilenetv1_0 | 43295 | 67463 | 66980 |
| mobilenetv2_0 | 30212 | 49380 | 45951 |
| mobilenetv3large_0 | 24612 | 37526 | 35451 |
| mobilenetv3small_0 | 8548 | 12541 | 12559 |
| proxylessnas_0 | 33052 | 51811 | 51633 |
| resnet18_0 | 137144 | 209733 | 216197 |
| resnet34_0 | 266756 | 398761 | 422993 |
| resnet50_0 | 284198 | 446612 | 452728 |
| squeezenet_0 | 61544 | 90353 | 97732 |
| vgg11_0 | 483518 | 760352 | 786351 |
| vgg13_0 | 759010 | 1185350 | 1212278 |
| vgg16_0 | 1017060 | 1578400 | 1633950 |
| vgg19_0 | 1273210 | 1971940 | 2055622 |
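
A quick check over a few rows of the table (all values in µs) shows how much closer the customized binary's measurements are to the predictions than the source-built binary's:

# Latencies in us, copied from a few rows of the table above.
rows = {
    "alexnet_0":  (53052, 78515, 84794),
    "resnet18_0": (137144, 209733, 216197),
    "vgg16_0":    (1017060, 1578400, 1633950),
}
for name, (src, custom, pred) in rows.items():
    err_src = abs(pred - src) / src
    err_custom = abs(pred - custom) / custom
    # e.g. resnet18_0: ~58% error vs the source build, ~3% vs the customized binary
    print(f"{name}: prediction error vs source build {err_src:.0%}, vs customized {err_custom:.0%}")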

JiahangXu (Collaborator) commented

Hi,

Our customized benchmark binaries were compiled around 2020. We did not specify Aarch64 for model profiling, which was an optional argument at that time. After the nn-Meter code was released, we found that a benchmark model compiled without Aarch64 produces profiling latencies that are all larger than those from a binary compiled with Aarch64. However, for consistency with our paper, we did not modify our released benchmark model.
We checked the official documentation of the TFLite benchmark model and noticed that Aarch64 is now the default setting, so I think this is likely the reason for your observation.
We will note this information about our customized benchmark model in our docs. Thanks for raising this question!
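
A quick way to check which kind of binary is being run is to inspect the ELF class of the benchmark_model file; a 64-bit (aarch64) build reports class 2 and a 32-bit build class 1 (a small sketch; the file name is the customized binary mentioned above, adjust the path as needed):

# Minimal ELF header check: byte 4 (EI_CLASS) is 1 for 32-bit and 2 for 64-bit binaries.
with open("benchmark_model_cpu_v2.1", "rb") as f:   # local path to the benchmark binary
    header = f.read(5)
assert header[:4] == b"\x7fELF", "not an ELF binary"
print("64-bit build (e.g. aarch64)" if header[4] == 2 else "32-bit build")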

Best regards,
Jiahang

AIxyz commented Oct 31, 2022

Hi @JiahangXu, these are my logs from a Docker container based on ubuntu:focal (3bc6e9f30f51), but now I can't reproduce the error in a new container. It's as if everything suddenly got better, yet the output of "python3 -m pip freeze" didn't change. The 'git log' in nn-Meter is as follows:

commit 4c10c002715e6df4649f015c9187dc8b28bfcc11 (HEAD -> xyz, origin/dev/dataset-generator, main)
Author: Jiahang Xu <jiahangxu@microsoft.com>
Date:   Fri Jul 1 11:07:10 2022 +0800

    Update version info after v2.0 (#75)

'diff -r /home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter /home/john/Work/nn-Meter/nn_meter' only shows differences in __pycache__.

My container logs are as follows:

root@53ad9dd671a5:/home/john/Projects/pb_models# nn-meter get_ir --tensorflow resnet18_0.pb -o resnet18_0.nnmir.json
2022-10-31 10:35:33.852847: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-10-31 10:35:33.852902: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
(nn-Meter) Input shape of fc13.fc/weight op is [].
(nn-Meter) Output shape of fc13.fc/weight op is [[512, 1000]].
(nn-Meter) Input shape of fc13.fc/weight/read op is [].
(nn-Meter) Output shape of fc13.fc/weight/read op is [[512, 1000]].
# ……………………………………………………………………………………………………………………………………………………………………………………………
(nn-Meter) Input shape of input_im_0 op is [].
(nn-Meter) Output shape of input_im_0 op is [[1, 224, 224, 3]].
(nn-Meter) Failed to get shape of node conv1.conv/Conv2D.
(nn-Meter) {'inbounds': ['input_im_0', 'conv1.conv/weight/read'], 'attr': {'name': 'conv1.conv/Conv2D', 'type': 'Conv2D', 'output_shape': [], 'attr': {'dilations': [1, 1, 1, 1], 'strides': [1, 2, 2, 1], 'data_format': b'NHWC', 'padding': b'SAME'}}, 'outbounds': ['conv1.bn.batchnorm/BatchNorm/FusedBatchNormV3']}
Traceback (most recent call last):
  File "/home/john/Python/env_py36X/bin/nn-meter", line 8, in <module>
    sys.exit(nn_meter_cli())
  File "/home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter/utils/nn_meter_cli/interface.py", line 266, in nn_meter_cli
    args.func(args)
  File "/home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter/utils/nn_meter_cli/predictor.py", line 69, in get_nnmeter_ir_cli
    graph = model_file_to_graph(args.tensorflow, 'pb')
  File "/home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter/ir_converter/utils.py", line 42, in model_file_to_graph
    converter = FrozenPbConverter(filename)
  File "/home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter/ir_converter/frozenpb_converter/frozenpb_converter.py", line 23, in __init__
    ShapeInference(self.model_graph, dynamic_fetcher)
  File "/home/john/Python/env_py36X/lib/python3.6/site-packages/nn_meter/ir_converter/frozenpb_converter/shape_inference.py", line 943, in __init__
    graph, graph[node_name]
TypeError: 'NoneType' object is not iterable
root@53ad9dd671a5:/home/john/Projects/nn_meter_cli/WorkSpace/pb_models# 
root@53ad9dd671a5:/home/john/Projects/nn_meter_cli/WorkSpace/pb_models# python3 --version
Python 3.6.10 :: Anaconda, Inc.
root@53ad9dd671a5:/home/john/Projects/nn_meter_cli/WorkSpace/pb_models# cd
root@53ad9dd671a5:~# 
root@53ad9dd671a5:~# python3 --version
Python 3.6.10 :: Anaconda, Inc.
root@53ad9dd671a5:~# python3 -m pip freeze
absl-py==0.15.0
astor==0.8.1
astunparse==1.6.3
attrs==22.1.0
cached-property==1.5.2
cachetools==4.2.4
certifi==2021.5.30
charset-normalizer==2.0.12
clang==5.0
cloudpickle==2.1.0
colorama==0.4.5
contextlib2==21.6.0
dataclasses==0.8
decorator==4.4.2
filelock==3.3.2
flatbuffers==1.12
future==0.18.2
gast==0.4.0
google-auth==2.10.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.47.0
h5py==3.1.0
hyperopt==0.1.2
idna==3.3
importlib-metadata==4.8.3
importlib-resources==5.4.0
joblib==1.1.0
json-tricks==3.15.5
jsonlines==3.1.0
keras==2.6.0
Keras-Preprocessing==1.1.2
Markdown==3.3.7
networkx==2.5.1
nn-meter @ file:///home/john/Work/nn-Meter
nni==2.6.1
numpy==1.19.5
oauthlib==3.2.0
onnx==1.9.0
onnx-simplifier==0.3.6
onnxoptimizer==0.2.7
onnxruntime==1.10.0
opt-einsum==3.3.0
packaging==21.3
pandas==1.1.5
Pillow==8.4.0
prettytable==2.5.0
protobuf==3.19.4
psutil==5.9.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pymongo==4.1.1
pyparsing==3.0.9
python-dateutil==2.8.2
PythonWebHDFS==0.2.3
pytz==2022.2
PyYAML==6.0
requests==2.27.1
requests-oauthlib==1.3.1
responses==0.17.0
rsa==4.9
schema==0.7.5
scikit-learn==0.24.2
scipy==1.5.4
simplejson==3.17.6
six==1.15.0
tensorboard==2.10.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.6.0
tensorflow-estimator==2.8.0
termcolor==1.1.0
threadpoolctl==3.1.0
torch==1.9.0
torchvision==0.10.0
tqdm==4.64.0
typeguard==2.13.3
typing-extensions==3.7.4.3
urllib3==1.26.11
wcwidth==0.2.5
websockets==9.1
Werkzeug==2.0.3
wrapt==1.12.1
zipp==3.6.0

Thanks & Regards

165749 (Author) commented Oct 31, 2022

@JiahangXu Thanks for the clarification.

165749 closed this as completed Oct 31, 2022