[UPD] update model type error message #725
Merged
Conversation
# Conflicts:
#   examples/ios/TNNExamples/TNNCameraPreviewController/TNNViewModel/TNNFaceDetectAlignerViewModel.mm
#   model/download_model.sh
* 'master' of https://github.com/darrenyao87/TNN:
  Feature rknpu support (#387)
  Feature arm yuv2bgra (#381)
# Conflicts:
#   examples/ios/TNNExamples/TNNCameraPreviewController/TNNViewModel/TNNFaceDetectAlignerViewModel.mm
#   examples/ios/TNNExamples/TNNYoutuFaceAlignController/TNNYoutuFaceAlignController.mm
* upstream/master: (31 commits)
  [ARM] fix reduce l2 layer error (#527)
  Patch rknpu (#528)
  [OPENCL] add opencl code gen to make stage (#521)
  Feature issue 475 (#517)
  [ARM][BUG] fix int8 dwconv kernel shape error (#514)
  [OPENCL] change opencl force fp32 to precision mode && fix layer (#522)
  [iOS] fix iPhone and simulator arch conflicts with Xcode12 (#523)
  Set name and type of Reformat LayerParam (#519)
  [DEVICE][OPENCL] optimize 3*3 convolution and asymmetric convolution (#515)
  [iOS] fix tnn iOS&macOS build error (#508)
  [DEV][UPD] 1. Int8Reformat -> Reformat;
  [ONNX][BUG] fix pool fusion bug (#500)
  Enhance warpaffine nearest (#501)
  [OPENCL] fix chinese comments (#493)
  Feature mat make border (#491)
  Enhance arm int8 (#486)
  [NPU][BUG] fix compile error due to api change
  Npu fp16 fix (#488)
  Feature fp16 workflow (#482)
  Opencl reduce softmax opt (#443)
  ...
# Conflicts:
#   platforms/ios/tnn.xcodeproj/project.pbxproj
* upstream/master:
  Hotfix opencl select (#555)
  Eff opt (#554)
  [OPENCL] optimize pooling with fine-grained parallelism (#553)
  create issue templates (#549)
  [OPENCL] fix fp16 select with short condition (#544)
  [Metal] enable CPU N8UC4 Mat in metal ConvertFromMat (#543)
  Add fuse SpaceToDepth and DepthToSpace (#542)
  [TOOL][ADD] add output name param support (#537)
  [CONVERTER][BUG] fix the bug of fuse conv (#529)
  [UPD] update iOS&macOS building scripts, check building errors (#531)
  [Metal] fix reshape out-of-bound access bug (#496)
  Feature demo hairsegmentation (#530)
  [Metal] fix bugs in Copy NCHW_FLOAT mat from metal to CPU (#472)
  support SpaceToDepth and DepthToSpace Operator (#526)
  update onnx coreml convert tool (#467)
* upstream/master: (39 commits)
  Feature arm fp16 op (#588)
  [QUANTIZED][BUG] 1. add QuantizedUpsample(layer_type.cc); (#644)
  support empty mat (#603)
  [RKNPU][BUG] fix memleak in network init (#640)
  [RKNPU][ADD] add rkmodel cache interface (#633)
  Fix issue 599 (#608)
  fix abstract_layer_acc.cc shadowed variable (#617)
  Fix sign compare warning (#627)
  [OPENCL] fix reduce multi axis kernel (#614)
  Fix metal convacc selection (#609)
  [ONNX2TNN][BUG] 1. fix onnx2tnn convolution input channel; (#611)
  [QUANT][BUG] fix per tensor quantconcat error (#607)
  [CONVERTER][BUG] 1. fix Issue #604; (#605)
  Fix remove squeeze (#571)
  Fix issue #566 (#570)
  Fix typo. (#594)
  [BUG] fix layer resource count error when packing model (#592)
  Fix typo. (#590)
  [OPENCL][FIX] fix iMac OpenCL 1.2 compatibility error that prevented unit tests from running (#593)
  Feature quant upsample (#589)
  ...
# Conflicts:
#   source/tnn/core/default_network.cc
#   source/tnn/device/cpu/cpu_context.h
#   source/tnn/device/opencl/opencl_context.h
Co-authored-by: tiankai <tiankai@tencent.com>
* [ARM][BUG] fix upsample cubic openmp too many args
* [ARM] fix blob converter unit test on a32
* [ARM] modify arm82 compile option

Co-authored-by: seanxcwang <seanxcwang@tencent.com>
Co-authored-by: seanxcwang <66675860+seanxcwang@users.noreply.github.com>
Co-authored-by: tiankai <tiankai@tencent.com>
* [RKNPU] new ddk cache interface
* [RKNPU] add tnn rk cache model type
* [RKNPU] refactor duplicate init cache graph code
* [CONVERT2TNN][UPD] 1. finish issue 695;

Co-authored-by: tiankai <tiankai@tencent.com>
* fix some failures in X86 UnitTest
* opt openvino binary layer impl
* add openvino argmax layer builder
* opt openvino unary layer builder impl
* [DOC] add MatConvertParam description
* [DOC] fix link error
* [CONVERTER2TNN][UPD] 1. default do not clean build directory; (#694)
  Co-authored-by: tiankai <tiankai@tencent.com>
* [ARM][BUG] fix upsample cubic openmp too many args (#692)
  * [ARM][BUG] fix upsample cubic openmp too many args
  * [ARM] fix blob converter unit test on a32
  * [ARM] modify arm82 compile option
  Co-authored-by: seanxcwang <seanxcwang@tencent.com>
  Co-authored-by: seanxcwang <66675860+seanxcwang@users.noreply.github.com>
* [CONVERTER2TNN][BUG] 1. fix bug for convert2tnn build.sh (#700)
  Co-authored-by: tiankai <tiankai@tencent.com>
* [TOOLS][BUG] fix model checker bug when only check output
* [ARM][BUG] fp16 reformat support multi-inputs
* [DOC] update model check doc and model_check cmd help info
* [TOOLS][CHG] fix model check info error and modify some return value

Co-authored-by: Dandiding <Dandiding@tencent.com>
Co-authored-by: neiltian <65950677+neiltian-tencent@users.noreply.github.com>
Co-authored-by: quinnrong94 <quinnrong@tencent.com>
Co-authored-by: lucasktian <lucasktian@tencent.com>
Co-authored-by: tiankai <tiankai@tencent.com>
Co-authored-by: quinnrong94 <67782915+quinnrong94@users.noreply.github.com>
Co-authored-by: seanxcwang <seanxcwang@tencent.com>
Co-authored-by: seanxcwang <66675860+seanxcwang@users.noreply.github.com>
* upstream/master: (106 commits)
  Stable v0.3 merge master (#709)
  Dev issue 695 (#708)
  Patch rknpu cache (#707)
  [CONVERTER2TNN][BUG] 1. fix bug for convert2tnn build.sh (#700)
  [ARM][BUG] fix upsample cubic openmp too many args (#692)
  [CONVERTER2TNN][UPD] 1. default do not clean build directory; (#694)
  Feature fp16 arm32 (#690)
  pose demo (#665)
  [TOOLS][ADD] Model checker support Huawei NPU (#684)
  Fix input name and the bug of optimization model (#674)
  [CUDA][FIX] fix merge error and support cuda scale
  [CUDA] mdf cuda reduce_l2
  [NPU][ADD] add convert for the NPU ResizeBilinearV2 operator (#646)
  fix github action failures
  fix ctest running issue
  fix gcc7.5 compiling issue
  [CUDA] delete redundant md5
  [SCRIPTS][BUILD] close treat warning as error
  [SCRIPTS][BUILD] set openvino build type release
  Feature opencl kernel opt merge (#672)
  ...
# Conflicts:
#   include/tnn/utils/half_utils.h
#   platforms/ios/tnn.xcodeproj/project.pbxproj
#   source/tnn/device/arm/acc/Half8.h
#   source/tnn/device/arm/acc/arm_relu6_layer_acc.cc
#   source/tnn/device/arm/acc/arm_sigmoid_layer_acc.cc
#   source/tnn/device/arm/acc/arm_unary_layer_acc.h
#   source/tnn/device/arm/acc/compute/compute.cc
#   source/tnn/device/arm/arm_blob_converter.cc
#   source/tnn/device/arm/arm_util.cc
#   source/tnn/utils/cpu_utils.cc
devandong approved these changes on Jan 8, 2021
lnmdlong added a commit that referenced this pull request on Jan 15, 2021
* fix some failures in X86 UnitTest
* opt openvino binary layer impl
* add openvino argmax layer builder
* opt openvino unary layer builder impl
* [DOC] add MatConvertParam description
* [DOC] fix link error
* [CONVERTER2TNN][UPD] 1. default do not clean build directory; (#694)
  Co-authored-by: tiankai <tiankai@tencent.com>
* [ARM][BUG] fix upsample cubic openmp too many args (#692)
  * [ARM][BUG] fix upsample cubic openmp too many args
  * [ARM] fix blob converter unit test on a32
  * [ARM] modify arm82 compile option
  Co-authored-by: seanxcwang <seanxcwang@tencent.com>
  Co-authored-by: seanxcwang <66675860+seanxcwang@users.noreply.github.com>
* [CONVERTER2TNN][BUG] 1. fix bug for convert2tnn build.sh (#700)
  Co-authored-by: tiankai <tiankai@tencent.com>
* [TOOLS][BUG] fix model checker bug when only check output
* [ARM][BUG] fp16 reformat support multi-inputs
* [DOC] update model check doc and model_check cmd help info
* [TOOLS][CHG] fix model check info error and modify some return value
* [ANDROID] object ssd demo fix force to use fp32 on huawei npu (#712)
* [BENCHMARK][FIX] support android 64bit force run 32bit (#716)
* Patch benchmark tool java (#719)
  * [BENCHMARK][FIX] support android 64bit force run 32bit
  * add java error msg deal
  Co-authored-by: neiltian <neiltian@tencent.com>
* [SCRIPTS][FIX] disable unit tests in release builds: unit tests expose symbols and pull in the generate-resource logic (#703)
* [EXAMPLE][FIX] remove unused cl include
* [UPD] update model type error message (#725)
* Change the format of the "-in" parameter for TensorFlow model conversion (#717); see the command-line example after this list
  * [CONVERT2TNN][UPD] 1. refactor convert tensorflow model (input_name[1,128,128,3] -> input_name:1,128,128,3);
  * [CONVERT2TNN][UPD] 1. update help message;

Co-authored-by: Dandiding <Dandiding@tencent.com>
Co-authored-by: quinnrong94 <quinnrong@tencent.com>
Co-authored-by: lucasktian <lucasktian@tencent.com>
Co-authored-by: tiankai <tiankai@tencent.com>
Co-authored-by: quinnrong94 <67782915+quinnrong94@users.noreply.github.com>
Co-authored-by: seanxcwang <seanxcwang@tencent.com>
Co-authored-by: seanxcwang <66675860+seanxcwang@users.noreply.github.com>
Co-authored-by: shaundai <shaundai@tencent.com>
Co-authored-by: ShaunDai <66760945+shaundai-tencent@users.noreply.github.com>
Co-authored-by: lnmdlong <lnmdlong@hotmail.com>
Co-authored-by: darrenyao87 <62542779+darrenyao87@users.noreply.github.com>
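For reference, the "-in" change from #717 replaces the bracketed shape suffix with a colon-separated one. The commands below are a minimal sketch of invoking the convert2tnn TensorFlow converter: the script path, model file, and the -tp/-on flags are illustrative assumptions, while the two -in formats are taken from the commit message above.

    # Old format (before #717): input shape appended in square brackets
    python3 converter.py tf2tnn -tp ./model.pb -in "input_name[1,128,128,3]" -on output_name

    # New format (after #717): input name and shape separated by a colon
    python3 converter.py tf2tnn -tp ./model.pb -in "input_name:1,128,128,3" -on output_name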