Order of input channels switched on ONNX #26

Closed
hovnatan opened this issue Feb 7, 2022 · 2 comments

hovnatan commented Feb 7, 2022

Issue Type

Others

OS

Mac OS

OS architecture

aarch64

Programming Language

C++

Framework

TensorFlowLite

Download URL for tflite file

https://github.com/google/mediapipe/blob/master/mediapipe/modules/face_detection/face_detection_short_range.tflite

Convert Script

tflite2tensorflow --model_path face_detection_short_range.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb
tflite2tensorflow --model_path face_detection_short_range.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_onnx --onnx_opset 9

Description

The input to the TFLite model is 1x128x128x3 (NHWC), but it is switched to 1x3x128x128 (NCHW) in the ONNX output.
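
A minimal workaround sketch, assuming onnxruntime is installed and the converter's default output path from the log below: transpose the NHWC tensor to NCHW before feeding the exported ONNX model.

import numpy as np
import onnxruntime as ort

# NHWC input exactly as the TFLite model expects it: (1, 128, 128, 3)
nhwc = np.random.rand(1, 128, 128, 3).astype(np.float32)

# The exported ONNX graph expects NCHW, so transpose before feeding it
nchw = np.transpose(nhwc, (0, 3, 1, 2))  # -> (1, 3, 128, 128)

sess = ort.InferenceSession("saved_model/model_float32.onnx")
input_name = sess.get_inputs()[0].name
for out in sess.run(None, {input_name: nchw}):
    print(out.shape)  # expect (1, 896, 16) and (1, 896, 1), order may vary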

Relevant Log Output

INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
inputs:
{'dtype': <class 'numpy.float32'>,
 'index': 0,
 'name': 'input',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1, 128, 128,   3], dtype=int32),
 'shape_signature': array([  1, 128, 128,   3], dtype=int32),
 'sparsity_parameters': {}}
outputs:
{'dtype': <class 'numpy.float32'>,
 'index': 175,
 'name': 'regressors',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1, 896,  16], dtype=int32),
 'shape_signature': array([  1, 896,  16], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 174,
 'name': 'classificators',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1, 896,   1], dtype=int32),
 'shape_signature': array([  1, 896,   1], dtype=int32),
 'sparsity_parameters': {}}
ONNX convertion started =============================================================

ONNX convertion complete! - saved_model/model_float32.onnx
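
For reference, the input/output details above can be reproduced with the TFLite Python interpreter (a sketch, assuming the TensorFlow Python package is installed):

from pprint import pprint
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="face_detection_short_range.tflite")
interpreter.allocate_tensors()
pprint(interpreter.get_input_details())
pprint(interpreter.get_output_details())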

Source code for simple inference testing code

No response

PINTO0309 (Owner) commented Feb 7, 2022

@hovnatan
Is your desired configuration the one in the image below?
[image attachment: screenshot of the proposed model configuration]

PINTO0309 (Owner) commented Feb 7, 2022

--disable_onnx_nchw_conversion has been added. Commits: ad2146f, 00d714d
https://github.com/PINTO0309/tflite2tensorflow/releases/tag/v1.18.4
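
A minimal verification sketch, assuming the second convert command above is re-run with --disable_onnx_nchw_conversion appended and onnxruntime is installed; it checks that the exported graph keeps the NHWC input:

import onnxruntime as ort

sess = ort.InferenceSession("saved_model/model_float32.onnx")
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # expected: input [1, 128, 128, 3] with NCHW conversion disabled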
