google.protobuf.message.DecodeError: Error parsing message with type 'tensorflow.GraphDef' #55

Closed
howardgriffin opened this issue Mar 1, 2022 · 3 comments

@howardgriffin

When I run 'nn-meter predict --predictor cortexA76cpu_tflite21 --predictor-version 1.0 --tensorflow mobilenetv3small_0.onnx' or 'nn-meter predict --predictor cortexA76cpu_tflite21 --tensorflow mobilenetv3small_0.json' on the command line, this error occurs. Any suggestions?

Traceback (most recent call last):
  File "/opt/conda/bin/nn-meter", line 8, in <module>
    sys.exit(nn_meter_cli())
  File "/opt/conda/lib/python3.7/site-packages/nn_meter/nn_meter_cli.py", line 182, in nn_meter_cli
    args.func(args)
  File "/opt/conda/lib/python3.7/site-packages/nn_meter/nn_meter_cli.py", line 54, in apply_latency_predictor_cli
    latency = predictor.predict(model, model_type) # in unit of ms
  File "/opt/conda/lib/python3.7/site-packages/nn_meter/predictor/nn_meter_predictor.py", line 102, in predict
    graph = model_file_to_graph(model, model_type, input_shape, apply_nni=apply_nni)
  File "/opt/conda/lib/python3.7/site-packages/nn_meter/ir_converter/utils.py", line 41, in model_file_to_graph
    converter = FrozenPbConverter(filename)
  File "/opt/conda/lib/python3.7/site-packages/nn_meter/ir_converter/frozenpb_converter/frozenpb_converter.py", line 15, in __init__
    parser = FrozenPbParser(file_name)
  File "/opt/conda/lib/python3.7/site-packages/nn_meter/ir_converter/frozenpb_converter/frozenpb_parser.py", line 19, in __init__
    graph.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message with type 'tensorflow.GraphDef'
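
For context, the --tensorflow code path reads the input file as a serialized tensorflow.GraphDef protobuf, and an ONNX file is not one. Below is a minimal sketch of the failing step, reconstructed from the traceback above (not the actual nn-Meter source):

    import tensorflow as tf

    graph = tf.compat.v1.GraphDef()
    with open("mobilenetv3small_0.onnx", "rb") as f:
        # An .onnx file contains an onnx.ModelProto, not a tensorflow.GraphDef,
        # so protobuf parsing fails with DecodeError.
        graph.ParseFromString(f.read())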

@JiahangXu
Collaborator

Hi, thanks for using nn-Meter! On the command line, the file format needs to be specified before the file name, and it has to match the file's actual format. You can try ... --onnx mobilenetv3small_0.onnx instead of ... --tensorflow mobilenetv3small_0.onnx. The whole command line should be nn-meter predict --predictor cortexA76cpu_tflite21 --predictor-version 1.0 --onnx mobilenetv3small_0.onnx. Similarly, prediction for the json file should be nn-meter predict --predictor cortexA76cpu_tflite21 --nn-meter-ir mobilenetv3small_0.json.
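
For reference, the same prediction can also be run from Python. Here is a minimal sketch based on the load_latency_predictor entry point shown in the nn-Meter README and the predictor.predict(model, model_type) call visible in the traceback (argument values are taken from this thread):

    from nn_meter import load_latency_predictor

    # Load the same predictor targeted by the command line above.
    predictor = load_latency_predictor("cortexA76cpu_tflite21", 1.0)

    # model_type must match the file format, just like the CLI flag:
    # "onnx" for .onnx files, "nn-meter-ir" for nn-Meter IR json files.
    latency = predictor.predict("mobilenetv3small_0.onnx", model_type="onnx")
    print(f"predicted latency: {latency} ms")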

We have noticed that the example command in the README is wrong; we're sorry for that. It will be fixed soon.

@howardgriffin
Author

That works, thank you.
And I have another question: the json model and the pb model have the same latency, but the onnx model seems to have a different latency from the json and pb models on the same hardware. How can this be explained?

@JiahangXu
Collaborator

Thanks for raising this question! It was a bug in the onnx model parser, and we have fixed it in PR #60. The onnx model latency is now 11.842 ms. The remaining slight gap between the onnx model and the json & pb models is caused by differences between the onnx and pb implementations of the same model.
