[GridSample] num_input_elements != num_output_elements (2223936 != 2235392). Node number 2 (RESHAPE) failed to prepare. Failed to apply the default TensorFlow Lite delegate indexed at 0. #308
To me, it looks like a bug in the TensorFlow runtime. The place you are focusing on is not the problem; you ignored the error message, which says there is a problem with a RESHAPE operation. Does the model look broken? To me, it looks like the error message is lying.
import os
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.simplefilter(action='ignore', category=Warning)
warnings.simplefilter(action='ignore', category=DeprecationWarning)
warnings.simplefilter(action='ignore', category=RuntimeWarning)
import random
random.seed(0)
import numpy as np
np.set_printoptions(
    precision=6,
    floatmode='fixed',
    suppress=True,
    edgeitems=3,
    linewidth=100,
)
np.random.seed(0)
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
from tensorflow.lite.python import interpreter as iw
TFLITE_PATH = 'gird_sample_float32.tflite'
interpreter = iw.Interpreter(
    model_path=TFLITE_PATH,
    num_threads=4,
)
input_details = interpreter.get_input_details()
input_shape_1 = input_details[0]['shape']
input_shape_2 = input_details[1]['shape']
output_details = interpreter.get_output_details()
interpreter.allocate_tensors()
I still get the same error even when I use this script. Is it because the error is in TensorFlow itself, and there is currently no way to fix it?
I only reproduced the error because you did not post the code to reproduce it. Did you seriously look at the images I posted?
Yes, I seriously checked them.
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '4'
import sys
sys.path.append('/data/ojw/convert')
import time
import pickle
from os import path
model_dir = '/data/ojw/convert/convert/split/grid_sample'  # renamed from 'dir' to avoid shadowing the builtin
TFLITE_PATH = path.join(model_dir, 'tflite/gird_sample_float32.tflite')
with open(path.join(model_dir, 'input.pkl'), 'rb') as f:
    ipt = pickle.load(f)
with open(path.join(model_dir, 'output.pkl'), 'rb') as f:
    opt = pickle.load(f)
import tensorflow as tf
# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=TFLITE_PATH, num_threads=4,)
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.set_tensor(
    input_details[0]['index'],
    ipt[0].cpu().detach().numpy(),
)
interpreter.set_tensor(
    input_details[1]['index'],
    ipt[1].cpu().detach().numpy(),
)
start = time.time()
interpreter.invoke()
print(time.time() - start)
If so, then you see no problem with the structure of the model? Isn't the TFLite runtime error message lying?
Does the sentence "Isn't the TFLite runtime error message lying?" mean that the error message is incorrect? I am asking to confirm whether my English conveys this properly. If that is the case, it seems to me that the error message is lying. Would it then be necessary for me to raise an issue directly with the TFLite runtime in order to resolve the error?
That's right, I assure you; everyone would reach the same conclusion. This error has nothing to do with the overall structure of the model. It is a bug in the TFLite runtime.
Thank you for your answer. Tomorrow I'll raise an issue and get back to you.
The only thing runtime users like us can do about this symptom is either locate the bug in the runtime and submit a pull request, or file an issue and wait a year or more.
Very interesting bug. It seems that dimensions other than the batch size are degenerating. 8 × 33 × 72 × 117 = 2223936
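The arithmetic above checks out; as a quick sanity check on the two numbers quoted by the error message (a minimal sketch, using only the figures from this thread):

```python
# Element counts quoted in the TFLite error message.
# The 8 x 33 x 72 x 117 shape is the one discussed in this comment.
num_input_elements = 8 * 33 * 72 * 117
num_output_elements = 2235392  # the count the RESHAPE node expected

print(num_input_elements)                        # 2223936, matching the message
print(num_output_elements - num_input_elements)  # 11456 elements of mismatch
```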
It is odd that a bug in the TFLite runtime should be addressed on the onnx2tf side. I would implement a workaround if the structure of the model were broken, but not when it is not broken. You should probably convert the
I will close this issue, and I will reopen it when I receive a response from TensorFlow. |
Issue Type
Others
onnx2tf version number
1.9.1
onnx version number
1.13.1
tensorflow version number
2.12.0
Download URL for ONNX
https://drive.google.com/file/d/1UZPbL5h6GJUwZTHPpab54TFeJNuaJRfS/view?usp=sharing
Parameter Replacement JSON
None
Description
I have noticed that an unknown error occurs when using the GridSample function from Torch, so I am trying to use the custom GridSample function that you mentioned in #274.
However, when I try to run inference with the TFLite model that includes the converted custom GridSample function, the following error occurs.
Is this a bug?
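The failure class behind the message can be illustrated outside TFLite. This is only a NumPy analogy (the element counts are taken from the error message above; the real model is not involved): a reshape is rejected whenever the requested output element count differs from the input's, which is the same check the RESHAPE node fails at prepare time.

```python
import numpy as np

# Input with the element count the error reports: 8*33*72*117 = 2223936.
x = np.zeros((8, 33, 72, 117), dtype=np.float32)

try:
    # Request the output element count the TFLite RESHAPE node expected.
    x.reshape(2235392)
except ValueError as e:
    # NumPy raises the analogous size-mismatch error.
    print(e)
```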