Stable v0.3 merge master (#723)
* fix some failures in X86 UnitTest

* opt openvino binary layer impl

* add openvino argmax layer builder

* opt openvino unary layer builder impl

* [DOC] add MatConvertParam description

* [DOC] fix link error

* [CONVERTER2TNN][UPD] 1. do not clean the build directory by default; (#694)

Co-authored-by: tiankai <tiankai@tencent.com>

* [ARM][BUG] fix upsample cubic openmp too many args (#692)

* [ARM][BUG] fix upsample cubic openmp too many args

* [ARM] fix blob converter unit test on a32

* [ARM] modify arm82 compile option

Co-authored-by: seanxcwang <seanxcwang@tencent.com>
Co-authored-by: seanxcwang <66675860+seanxcwang@users.noreply.github.com>

* [CONVERTER2TNN][BUG]1.fix bug for convert2tnn build.sh (#700)

Co-authored-by: tiankai <tiankai@tencent.com>

* [TOOLS][BUG] fix model checker bug when only checking output

* [ARM][BUG] fp16 reformat support multi-inputs

* [DOC] update model check doc and model_check cmd help info

* [TOOLS][CHG] fix model check info error and modify some return value

* [ANDROID] object ssd demo fix force to use fp32 on huawei npu (#712)

* [BENCHMARK][FIX] support android 64bit force run 32bit (#716)

* Patch benchmark tool java (#719)

* [BENCHMARK][FIX] support android 64bit force run 32bit

* add java error msg deal

Co-authored-by: neiltian <neiltian@tencent.com>

* [SCRIPTS][FIX] disable unit tests in the release build scripts: unit tests expose symbols and pull in the generate-resource logic (#703)

* [EXAMPLE][FIX] remove unused cl include

* [UPD] update model type error message (#725)

* Change the format of the "-in" argument for TensorFlow model conversion (#717); an example sketch follows below

* [CONVERT2TNN][UPD] 1.refactor convert tensorflow model(input_name[1,128,128,3] -> input_name:1,128,128,3);

* [CONVERT2TNN][UPD] 1. update help message;
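An illustrative invocation with the new syntax (a sketch only; the model path, tensor name, and shape are placeholders, not taken from this commit):

```shell script
# old style: -in "input_name[1,128,128,3]"
# new style: -in input_name:1,128,128,3
python3 converter.py tf2tnn -tp ./test.pb -in "input_name:1,128,128,3" -on output_name
```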

Co-authored-by: Dandiding <Dandiding@tencent.com>
Co-authored-by: quinnrong94 <quinnrong@tencent.com>
Co-authored-by: lucasktian <lucasktian@tencent.com>
Co-authored-by: tiankai <tiankai@tencent.com>
Co-authored-by: quinnrong94 <67782915+quinnrong94@users.noreply.github.com>
Co-authored-by: seanxcwang <seanxcwang@tencent.com>
Co-authored-by: seanxcwang <66675860+seanxcwang@users.noreply.github.com>
Co-authored-by: shaundai <shaundai@tencent.com>
Co-authored-by: ShaunDai <66760945+shaundai-tencent@users.noreply.github.com>
Co-authored-by: lnmdlong <lnmdlong@hotmail.com>
Co-authored-by: darrenyao87 <62542779+darrenyao87@users.noreply.github.com>
12 people authored Jan 15, 2021
1 parent 38b14ea commit 08c9213
Showing 12 changed files with 107 additions and 109 deletions.
@@ -38,11 +38,11 @@ private void init() {
Bundle bundle = intent.getExtras();
String benchmark_dir = bundle.getString(ARGS_INTENT_KEY_BENCHMARK_DIR, "/data/local/tmp/tnn-benchmark/");
String[] load_list = bundle.getStringArray(ARGS_INTENT_KEY_LOAD_LIST);
model = bundle.getString(ARGS_INTENT_KEY_MODEL);
for(String element : load_list) {
FileUtils.copyFile(benchmark_dir + "/" + element, getFilesDir().getAbsolutePath() + "/" + element);
System.load(getFilesDir().getAbsolutePath() + "/" + element);
}
model = bundle.getString(ARGS_INTENT_KEY_MODEL);
final String args = bundle.getString(ARGS_INTENT_KEY_ARGS_0, bundle.getString(ARGS_INTENT_KEY_ARGS_1));
final String file_dir = this.getFilesDir().getAbsolutePath();
String output_path = file_dir + "/" + model;
@@ -57,8 +57,8 @@ private void init() {
if(result != 0) {
Log.i("tnn", String.format(" %s TNN Benchmark time cost failed error code: %d \n", model , result));
}
} catch(Exception e) {
Log.i("tnn", String.format(" %s TNN Benchmark time cost failed exception: %s \n", model, e.getMessage()));
} catch(Error | Exception e) {
Log.i("tnn", String.format(" %s TNN Benchmark time cost failed error/exception: %s \n", model, e.getMessage()));
}
}

6 changes: 5 additions & 1 deletion benchmark/benchmark_android/benchmark_models.sh
@@ -203,7 +203,11 @@ function bench_android_app() {
build_android_bench
build_android_bench_app

adb install -r benchmark-release.apk
if [ "$ABI" = "armeabi-v7a with NEON" ];then
adb install -r --abi armeabi-v7a benchmark-release.apk
else
adb install -r --abi $ABI benchmark-release.apk
fi

$ADB shell "mkdir -p $ANDROID_DIR/benchmark-model"
$ADB push ${BENCHMARK_MODEL_DIR} $ANDROID_DIR
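The change above passes an explicit --abi to adb install so that a 32-bit (armeabi-v7a) benchmark build can be forced onto a 64-bit device. A quick way to confirm which ABI the installed APK actually got (a sketch; the package name is an assumption, not taken from this diff):

```shell script
# Inspect the ABI recorded for the installed benchmark app (package name assumed)
adb shell dumpsys package com.tencent.tnn.benchmark | grep -i primaryCpuAbi
```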
63 changes: 27 additions & 36 deletions doc/cn/user/convert.md
@@ -103,36 +103,34 @@ docker run -it tnn-convert:latest python3 ./converter.py tf2tnn -h
```
得到的输出信息如下:
``` text
usage: convert tf2tnn [-h] -tp TF_PATH -in input_name -on output_name
[-o OUTPUT_DIR] [-v v1.0] [-optimize] [-half]
usage: convert tf2tnn [-h] -tp TF_PATH -in input_info [input_info ...] -on output_name [output_name ...] [-o OUTPUT_DIR] [-v v1.0] [-optimize] [-half] [-align] [-input_file INPUT_FILE_PATH]
[-ref_file REFER_FILE_PATH]
optional arguments:
-h, --help show this help message and exit
-tp TF_PATH the path for tensorflow graphdef file
-in input_name the tensorflow model's input names. If batch is not
specified, you can add input shape after the input
name, e.g. -in "name[1,28,28,3]"
-on output_name the tensorflow model's output name
-in input_info [input_info ...]
specify the input name and shape of the model. e.g., -in input1_name:1,128,128,3 input2_name:1,256,256,3
-on output_name [output_name ...]
the tensorflow model's output name. e.g. -on output_name1 output_name2
-o OUTPUT_DIR the output tnn directory
-v v1.0 the version for model
-optimize optimize the model
-half optimize the model
-align align the onnx model with tnn model
-input_file INPUT_FILE_PATH
the input file path which contains the input data for
the inference model.
the input file path which contains the input data for the inference model.
-ref_file REFER_FILE_PATH
the reference file path which contains the reference
data to compare the results.
the reference file path which contains the reference data to compare the results.
```
通过上面的输出,可以发现针对 TF 模型的转换,convert2tnn 工具提供了很多参数,我们一次对下面的参数进行解释:

- tp 参数(必须)
通过 “-tp” 参数指定需要转换的模型的路径。目前只支持单个 TF模型的转换,不支持多个 TF 模型的一起转换。
- in 参数(必须)
通过 “-in” 参数指定模型输入的名称,输入的名称需要放到“”中,例如,-in "name"。如果模型有多个输入,请使用 “;”进行分割。有的 TensorFlow 模型没有指定 batch 导致无法成功转换为 ONNX 模型,进而无法成功转换为 TNN 模型。你可以通过在名称后添加输入 shape 进行指定。shape 信息需要放在 [] 中。例如:-in "name[1,28,28,3]"
通过 “-in” 参数指定模型输入,例如:-in input_name_1:1,128,128,3 input_name_2:1,256,256,3
- on 参数(必须)
通过 “-on” 参数指定模型输出的名称,如果模型有多个输出,请使用 “;”进行分割
通过 “-on” 参数指定模型输出的名称,例如: -on output_name1 output_name2
- output_dir 参数:
可以通过 “-o <path>” 参数指定输出路径,但是在 docker 中我们一般不使用这个参数,默认会将生成的 TNN 模型放在当前和 TF 模型相同的路径下。
- optimize 参数(可选)
@@ -156,8 +154,8 @@ optional arguments:
``` shell script
docker run --volume=$(pwd):/workspace -it tnn-convert:latest python3 ./converter.py tf2tnn \
-tp /workspace/test.pb \
-in "input0[1,32,32,3];input1[1,32,32,3]" \
-on output0 \
-in "input0:1,32,32,3 input2:1,32,32,3" \
-on output0 output1 \
-v v2.0 \
-optimize \
-align \
@@ -332,31 +330,26 @@ python3 converter.py onnx2tnn -h
```
usage 信息如下:
```text
usage: convert onnx2tnn [-h] [-in input_name [input_name ...]] [-optimize]
[-half] [-v v1.0.0] [-o OUTPUT_DIR] [-align]
[-input_file INPUT_FILE_PATH]
[-ref_file REFER_FILE_PATH]
usage: convert onnx2tnn [-h] [-in input_info [input_info ...]] [-optimize] [-half] [-v v1.0.0] [-o OUTPUT_DIR] [-align]
[-input_file INPUT_FILE_PATH] [-ref_file REFER_FILE_PATH]
onnx_path
positional arguments:
onnx_path the path for onnx file
optional arguments:
-h, --help show this help message and exit
-in input_name [input_name ...]
specify the input name and shape of the model. e.g.,
-in in1:1,3,8,8 in2:1,8
-in input_info [input_info ...]
specify the input name and shape of the model. e.g., -in input1_name:1,3,8,8 input2_name:1,8
-optimize optimize the model
-half save model using half
-v v1.0.0 the version for model
-o OUTPUT_DIR the output tnn directory
-align align the onnx model with tnn model
-input_file INPUT_FILE_PATH
the input file path which contains the input data for
the inference model.
the input file path which contains the input data for the inference model.
-ref_file REFER_FILE_PATH
the reference file path which contains the reference
data to compare the results.
the reference file path which contains the reference data to compare the results.
```
示例:
```shell script
@@ -439,28 +432,26 @@ python3 converter.py caffe2tnn \
python3 converter.py tf2tnn -h
```
usage 信息如下:
```
usage: convert tf2tnn [-h] -tp TF_PATH -in input_name -on output_name
[-o OUTPUT_DIR] [-v v1.0] [-optimize] [-half]
```text
usage: convert tf2tnn [-h] -tp TF_PATH -in input_info [input_info ...] -on output_name [output_name ...] [-o OUTPUT_DIR] [-v v1.0] [-optimize] [-half] [-align] [-input_file INPUT_FILE_PATH]
[-ref_file REFER_FILE_PATH]
optional arguments:
-h, --help show this help message and exit
-tp TF_PATH the path for tensorflow graphdef file
-in input_name the tensorflow model's input names. If batch is not
specified, you can add input shape after the input
name, e.g. -in "name[1,28,28,3]"
-on output_name the tensorflow model's output name
-in input_info [input_info ...]
specify the input name and shape of the model. e.g., -in input1_name:1,128,128,3 input2_name:1,256,256,3
-on output_name [output_name ...]
the tensorflow model's output name. e.g. -on output_name1 output_name2
-o OUTPUT_DIR the output tnn directory
-v v1.0 the version for model
-optimize optimize the model
-half optimize the model
-align align the onnx model with tnn model
-input_file INPUT_FILE_PATH
the input file path which contains the input data for
the inference model.
the input file path which contains the input data for the inference model.
-ref_file REFER_FILE_PATH
the reference file path which contains the reference
data to compare the results.
the reference file path which contains the reference data to compare the results.
```
- tensorflow-lite2tnn
67 changes: 39 additions & 28 deletions doc/en/user/convert_en.md
@@ -103,31 +103,34 @@ docker run -it tnn-convert:latest python3 ./converter.py tf2tnn -h
```
The output shows below:
``` text
usage: convert tf2tnn [-h] -tp TF_PATH -in input_name -on output_name
[-o OUTPUT_DIR] [-v v1.0] [-optimize] [-half]
usage: convert tf2tnn [-h] -tp TF_PATH -in input_info [input_info ...] -on output_name [output_name ...] [-o OUTPUT_DIR] [-v v1.0] [-optimize] [-half] [-align] [-input_file INPUT_FILE_PATH]
[-ref_file REFER_FILE_PATH]
optional arguments:
-h, --help show this help message and exit
-tp TF_PATH the path for tensorflow graphdef file
-in input_name the tensorflow model's input names
-on output_name the tensorflow model's output name
-o OUTPUT_DIR the output tnn directory
-v v1.0 the version for model
-optimize optimize the model
-half optimize the model
-align align the onnx model with tnn model
-fold_const enable tf constant_folding transformation before conversion
-input_file the input file path which contains the input data for the inference model
-ref_file the reference file path which contains the reference data to compare the results
-h, --help show this help message and exit
-tp TF_PATH the path for tensorflow graphdef file
-in input_info [input_info ...]
specify the input name and shape of the model. e.g., -in input1_name:1,128,128,3 input2_name:1,256,256,3
-on output_name [output_name ...]
the tensorflow model's output name. e.g. -on output_name1 output_name2
-o OUTPUT_DIR the output tnn directory
-v v1.0 the version for model
-optimize optimize the model
-half optimize the model
-align align the onnx model with tnn model
-input_file INPUT_FILE_PATH
the input file path which contains the input data for the inference model.
-ref_file REFER_FILE_PATH
the reference file path which contains the reference data to compare the results.
```
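For orientation, an invocation using the new -in/-on syntax might look like the sketch below (the mounted path, tensor names, and shapes are placeholders; it mirrors the example in the Chinese version of this document):

```shell script
docker run --volume=$(pwd):/workspace -it tnn-convert:latest python3 ./converter.py tf2tnn \
    -tp /workspace/test.pb \
    -in "input0:1,32,32,3 input1:1,32,32,3" \
    -on output0 output1 \
    -v v2.0 \
    -optimize \
    -align
```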
Here are the explanations for each parameter:

- tp parameter (required)
Use the "-tp" parameter to specify the path of the model to be converted. Currently only supports the conversion of a single TF model, does not support the conversion of multiple TF models together.
- in parameter (required)
Specify the name of the model input through the "-in" parameter. If the model has multiple inputs, use ";" to split. Some models specify placeholders with unknown ranks and dims which can not be mapped to onnx. In those cases one can add the shape after the input name inside [], for example -in name[1,28,28,3]
Specify the name of the model input through the "-in" parameter, for example "-in input1_name:1,128,128,3 input2_name:1,256,256,3"
- on parameter (required)
Specify the name of the model output through the "-on" parameter. If the model has multiple outputs, use ";" to split
Specify the name of the model output through the "-on" parameter, for example "-on output_name1 output_name2"
- output_dir parameter:
You can specify the output path through the "-o <path>" parameter, but we generally do not apply this parameter in docker. By default, the generated TNN model will be placed in the same path as the TF model.
- optimize parameter (optional)
@@ -317,21 +320,26 @@ python3 converter.py onnx2tnn -h
```
usage information:
```text
usage: convert onnx2tnn [-h] [-optimize] [-half] [-v v1.0.0] [-o OUTPUT_DIR]
usage: convert onnx2tnn [-h] [-in input_info [input_info ...]] [-optimize] [-half] [-v v1.0.0] [-o OUTPUT_DIR] [-align]
[-input_file INPUT_FILE_PATH] [-ref_file REFER_FILE_PATH]
onnx_path
positional arguments:
onnx_path the path for onnx file
onnx_path the path for onnx file
optional arguments:
-h, --help show this help message and exit
-in input_info [input_info ...]
specify the input name and shape of the model. e.g., -in input1_name:1,3,8,8 input2_name:1,8
-optimize optimize the model
-half save model using half
-v v1.0.0 the version for model
-o OUTPUT_DIR the output tnn directory
-align align the onnx model with tnn model
-input_file in.txt the input file path which contains the input data for the inference model
-ref_file ref.txt the reference file path which contains the reference data to compare the results
-input_file INPUT_FILE_PATH
the input file path which contains the input data for the inference model.
-ref_file REFER_FILE_PATH
the reference file path which contains the reference data to compare the results.
```
Example:
```shell script
@@ -397,23 +405,26 @@ The current convert2tnn model only supports the graphdef model, but does not sup
python3 converter.py tf2tnn -h
```
usage information:
```
usage: convert tf2tnn [-h] -tp TF_PATH -in input_name -on output_name
[-o OUTPUT_DIR] [-v v1.0] [-optimize] [-half]
```text
usage: convert tf2tnn [-h] -tp TF_PATH -in input_info [input_info ...] -on output_name [output_name ...] [-o OUTPUT_DIR] [-v v1.0] [-optimize] [-half] [-align] [-input_file INPUT_FILE_PATH]
[-ref_file REFER_FILE_PATH]
optional arguments:
-h, --help show this help message and exit
-tp TF_PATH the path for tensorflow graphdef file
-in input_name the tensorflow model's input names
-on output_name the tensorflow model's output name
-in input_info [input_info ...]
specify the input name and shape of the model. e.g., -in input1_name:1,128,128,3 input2_name:1,256,256,3
-on output_name [output_name ...]
the tensorflow model's output name. e.g. -on output_name1 output_name2
-o OUTPUT_DIR the output tnn directory
-v v1.0 the version for model
-optimize optimize the model
-half optimize the model
-align align the onnx model with tnn model
-fold_const enable tf constant_folding transformation before conversion
-input_file in.txt the input file path which contains the input data for the inference model
-ref_file out.txt the reference file path which contains the reference data to compare the results
-input_file INPUT_FILE_PATH
the input file path which contains the input data for the inference model.
-ref_file REFER_FILE_PATH
the reference file path which contains the reference data to compare the results.
```
Example:
```shell script
@@ -59,8 +59,6 @@ TNN_OBJECT_DETECTORSSD(init)(JNIEnv *env, jobject thiz, jstring modelPath, jint
//add for huawei_npu store the om file
LOGI("the device type %d device huawei_npu", gComputeUnitType);
option->compute_units = TNN_NS::TNNComputeUnitsHuaweiNPU;
// skip low precision for tmp on npu, need to fix
option->precision = TNN_NS::PRECISION_HIGH;
gDetector->setNpuModelPath(modelPathStr + "/");
gDetector->setCheckNpuSwitch(false);
status = gDetector->Init(option);
2 changes: 0 additions & 2 deletions examples/android/demo/src/main/jni/cc/tnn_lib.h
@@ -23,8 +23,6 @@
#define CL_HPP_TARGET_OPENCL_VERSION 110
#define CL_HPP_MINIMUM_OPENCL_VERSION 110

#include "CL/cl2.hpp"

class TNNLib {
public:
TNNLib();
1 change: 0 additions & 1 deletion scripts/build_aarch64_macos.sh
@@ -16,7 +16,6 @@ cd build_aarch64_macos

cmake ${TNN_ROOT_PATH} \
-DTNN_TEST_ENABLE=ON \
-DTNN_UNIT_TEST_ENABLE=ON \
-DCMAKE_BUILD_TYPE=Release \
-DTNN_ARM_ENABLE:BOOL=$ARM \
-DTNN_ARM82_ENABLE:BOOL=$ARM82 \
1 change: 0 additions & 1 deletion scripts/build_armhf_linux.sh
@@ -27,7 +27,6 @@ cmake ${TNN_ROOT_PATH} \
-DTNN_ARM_ENABLE:BOOL=$ARM \
-DTNN_OPENMP_ENABLE:BOOL=$OPENMP \
-DTNN_OPENCL_ENABLE:BOOL=$OPENCL \
-DTNN_UNIT_TEST_ENABLE=OFF \
-DCMAKE_SYSTEM_PROCESSOR=$TARGET_ARCH \
-DTNN_BUILD_SHARED:BOOL=$SHARED_LIB

1 change: 0 additions & 1 deletion scripts/build_cuda_linux.sh
@@ -24,7 +24,6 @@ cmake ${TNN_ROOT_PATH} \
-DTNN_OPENMP_ENABLE=OFF \
-DTNN_OPENCL_ENABLE=OFF \
-DTNN_QUANTIZATION_ENABLE=OFF \
-DTNN_UNIT_TEST_ENABLE=ON \
-DTNN_COVERAGE=OFF \
-DTNN_BENCHMARK_MODE=OFF \
-DTNN_BUILD_SHARED=ON \
4 changes: 2 additions & 2 deletions source/tnn/core/tnn.cc
@@ -26,8 +26,8 @@ TNN::~TNN() {
Status TNN::Init(ModelConfig& config) {
impl_ = TNNImplManager::GetTNNImpl(config.model_type);
if (!impl_) {
LOGE("Error: not support mode type: %d\n", config.model_type);
return Status(TNNERR_NET_ERR, "not support mode type");
LOGE("Error: not support mode type: %d. If TNN is a static library, link it with option -Wl,--whole-archive tnn -Wl,--no-whole-archive on android or add -force_load on iOS\n", config.model_type);
return Status(TNNERR_NET_ERR, "unsupport mode type, If TNN is a static library, link it with option -Wl,--whole-archive tnn -Wl,--no-whole-archive on android or add -force_load on iOS");
}
return impl_->Init(config);
}
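The expanded message documents a linking pitfall: when TNN is consumed as a static library, the linker may drop the object files that register each model type, so Init() cannot resolve the requested type. A minimal link-line sketch based on the flags named in the message (output and library paths are placeholders):

```shell script
# Android (NDK clang++): keep every TNN object so model-type registration is not stripped
clang++ -o demo demo.o -Wl,--whole-archive libTNN.a -Wl,--no-whole-archive
# iOS: add the equivalent option to the link flags instead
#   -force_load /path/to/libTNN.a
```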