Got OrtException when deploy ONNX model to Android #52

Closed
zxixia opened this issue Feb 9, 2023 · 11 comments

@zxixia

zxixia commented Feb 9, 2023

Hello team,

I followed the guide here: https://github.com/OFA-Sys/Chinese-CLIP/blob/master/deployment.md and successfully got the ONNX models listed below:
vit-b-16.txt.fp32.onnx 391 MB
vit-b-16.txt.fp16.onnx 2.27 MB
vit-b-16.img.fp32.onnx 332 MB
vit-b-16.img.fp16.onnx 3.34 MB
vit-b-16.txt.fp16.onnx.extra_file 194 MB
vit-b-16.img.fp16.onnx.extra_file 164 MB

But when I deployed the image model ("vit-b-16.img.fp32.onnx") to Android, I got the following exception:
ai.onnxruntime.OrtException: Error code - ORT_INVALID_GRAPH - message: This is an invalid model. Error in Node:/visual/Unsqueeze : Node (/visual/Unsqueeze) has input size 2 not in range [min=1, max=1].
    at ai.onnxruntime.OrtSession.createSession(Native Method)
    at ai.onnxruntime.OrtSession.<init>(OrtSession.java:82)
    at ai.onnxruntime.OrtEnvironment.createSession(OrtEnvironment.java:206)
    at ai.onnxruntime.OrtEnvironment.createSession(OrtEnvironment.java:179)

I'm just a newbie here; could your team give some suggestions on how to overcome this bug?

Thanks so much.

@yangapku
Member

yangapku commented Feb 9, 2023

Hello, we are not very familiar with deploying ONNX on Android either 😢 Our initial suspicion is an onnxruntime version issue. Could you align your versions with the ones in our documentation and try again?

@zxixia
Author

zxixia commented Feb 10, 2023

Hello,

Thanks for the reply.

Here is my environment; it is aligned with the documentation, but I still get the same error. If it's convenient, could you share the ONNX models generated on your side via a cloud drive? Thanks.

Package Version


certifi 2022.12.7
charset-normalizer 3.0.1
colorama 0.4.6
coloredlogs 15.0.1
filelock 3.9.0
flatbuffers 23.1.21
huggingface-hub 0.12.0
humanfriendly 10.0
idna 3.4
joblib 1.2.0
lmdb 1.3.0
mpmath 1.2.1
numpy 1.24.2
onnx 1.13.0
onnxconverter-common 1.13.0
onnxmltools 1.11.1
onnxruntime-gpu 1.13.1
packaging 23.0
Pillow 9.4.0
pip 22.2.2
protobuf 3.20.3
pyreadline3 3.4.1
PyYAML 6.0
requests 2.28.2
scikit-learn 1.1.1
scipy 1.10.0
setuptools 63.2.0
six 1.16.0
skl2onnx 1.13
sympy 1.11.1
threadpoolctl 3.1.0
timm 0.6.12
torch 1.13.1
torchvision 0.14.1
tqdm 4.64.1
typing_extensions 4.4.0
urllib3 1.26.14

@ZhangJianwei0311

After the ONNX conversion succeeds, please test whether your converted model can run with onnxruntime (see cn_clip/deploy/speed_benchmark.py). That will tell us whether the current problem is in the ONNX conversion itself or in deploying the ONNX model to Android.
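
As an alternative quick sanity check, the same kind of test can be done with the ai.onnxruntime Java API that Android uses; a minimal sketch is below (the input name "image" and the 1x3x224x224 shape are assumptions for the ViT-B/16 image export, adjust to your model):

import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtSession;
import java.util.Collections;

public class OnnxSanityCheck {
    public static void main(String[] args) throws Exception {
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        // Load the exported image encoder (path is an example).
        try (OrtSession session = env.createSession("vit-b-16.img.fp32.onnx",
                new OrtSession.SessionOptions())) {
            // Dummy input; "image" and 1x3x224x224 are assumed for the ViT-B/16 export.
            float[][][][] pixels = new float[1][3][224][224];
            try (OnnxTensor input = OnnxTensor.createTensor(env, pixels);
                 OrtSession.Result result = session.run(Collections.singletonMap("image", input))) {
                float[][] features = (float[][]) result.get(0).getValue();
                System.out.println("image feature dim: " + features[0].length);
            }
        }
    }
}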

@zxixia
Author

zxixia commented Feb 10, 2023

Hello,

The ONNX model runs fine on PC. But when I deploy it on Android, the error is:
Error in Node:/visual/Unsqueeze : Node (/visual/Unsqueeze) has input size 2 not in range [min=1, max=1].

Just to confirm, this Node:/visual/Unsqueeze is a node inside the model, right?

@zxixia
Author

zxixia commented Feb 10, 2023

During the model conversion I saw a log line like this:
%/visual/Unsqueeze_output_0 : Long(1, strides=[1], device=cpu) = onnx::Unsqueeze[onnx_name="/visual/Unsqueeze"](%/visual/Constant_1_output_0, %onnx::Unsqueeze_359), scope: cn_clip.clip.model.CLIP::/cn_clip.clip.model.VisualTransformer::visual

@wh0x

wh0x commented Feb 10, 2023

Hello, have you managed to deploy it successfully now?

@zxixia
Author

zxixia commented Feb 10, 2023

It runs on PC, but not on Android.

@wh0x

wh0x commented Feb 10, 2023

It runs on PC, but not on Android.
Could you try another model, e.g. RN50?

@zxixia
Author

zxixia commented Feb 10, 2023

On the Android side, the text model hits the following error:

Error in Node:/bert/Unsqueeze : Node (/bert/Unsqueeze) has input size 2 not in range [min=1, max=1].

@zxixia
Author

zxixia commented Feb 10, 2023

It was a problem with the environment on the Android side. I changed the ONNX Runtime dependency as follows and now it works:

implementation group: 'com.microsoft.onnxruntime', name: 'onnxruntime-android', version: '1.13.1'
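
For context, the "has input size 2 not in range [min=1, max=1]" error on Unsqueeze is consistent with the runtime resolving the node against a pre-opset-13 schema (opset 13 moved the axes of Unsqueeze from an attribute to a second input), which is why aligning the Android ONNX Runtime package and version matters. A minimal Android-side sketch using the dependency above (the file name and path handling are assumptions, adapt to your app):

import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;
import ai.onnxruntime.OrtSession;
import android.content.Context;
import java.io.File;

// In your inference helper class. Assumes the exported .onnx file has already
// been copied (from assets or a download) into the app's files directory.
OrtSession createImageSession(Context context) throws OrtException {
    File modelFile = new File(context.getFilesDir(), "vit-b-16.img.fp32.onnx");
    OrtEnvironment env = OrtEnvironment.getEnvironment();
    return env.createSession(modelFile.getAbsolutePath(), new OrtSession.SessionOptions());
}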

@ldfandian

A question: when running the ONNX model on Android, what is the average latency per image, in ms?
Have you compared it against Android NNAPI?
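
A rough way to measure this and to compare the NNAPI execution provider against the default CPU provider with onnxruntime-android is sketched below (the input name "image" and the 1x3x224x224 shape are assumptions for the image model):

import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;
import ai.onnxruntime.OrtSession;
import java.util.Collections;

// Returns the average per-image latency in milliseconds for the given model file.
long benchmarkImageEncoder(String modelPath, boolean useNnapi) throws OrtException {
    OrtEnvironment env = OrtEnvironment.getEnvironment();
    OrtSession.SessionOptions opts = new OrtSession.SessionOptions();
    if (useNnapi) {
        opts.addNnapi();  // NNAPI execution provider; default CPU provider otherwise
    }
    try (OrtSession session = env.createSession(modelPath, opts)) {
        // Dummy input; "image" and 1x3x224x224 are assumed for the ViT-B/16 export.
        float[][][][] pixels = new float[1][3][224][224];
        try (OnnxTensor input = OnnxTensor.createTensor(env, pixels)) {
            session.run(Collections.singletonMap("image", input)).close();  // warm-up run
            int runs = 10;
            long start = System.nanoTime();
            for (int i = 0; i < runs; i++) {
                session.run(Collections.singletonMap("image", input)).close();
            }
            return (System.nanoTime() - start) / runs / 1_000_000;  // avg ms per image
        }
    }
}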
