Getting [java.lang.IllegalArgumentException: Internal error: Error applying delegate] error while applying GPU Delegate #65648
Comments
Thank you for your reply. I have tried the code above. It does catch the exception; however, the log entry only changes from 'Error' to 'Warning':
The problem seems to still exist.
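The try/catch fallback being discussed can be sketched as follows. This is a minimal Python sketch, not the actual Android code from the app; `load_gpu_interpreter` and `load_cpu_interpreter` are hypothetical stubs standing in for the real TFLite `Interpreter` construction, and the stub simulates the `Error applying delegate` failure seen in this issue.

```python
# Sketch of the try-GPU-then-fall-back-to-CPU pattern discussed above.
# The two loader functions are hypothetical stubs, not real TFLite APIs.

def load_gpu_interpreter(model_path: str) -> str:
    # Stub: simulate the failure seen in this issue when the GPU
    # delegate cannot be applied to the model.
    raise RuntimeError("Internal error: Error applying delegate")

def load_cpu_interpreter(model_path: str) -> str:
    # Stub: CPU construction always succeeds here.
    return f"cpu-interpreter({model_path})"

def load_interpreter(model_path: str) -> str:
    try:
        return load_gpu_interpreter(model_path)
    except RuntimeError as err:
        # Catching the exception downgrades the failure from an app
        # crash ('Error') to a logged 'Warning', as observed above --
        # but the model still runs on CPU, not GPU.
        print(f"Warning: GPU delegate failed ({err}); falling back to CPU")
        return load_cpu_interpreter(model_path)

print(load_interpreter("autocomplete.tflite"))
```

This illustrates why the log level changes: the exception is swallowed and the interpreter silently runs without the delegate.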
Hi @walker-ai, I have been trying to replicate your issue, but I don't have the autocomplete.tflite file. Can you provide me the autocomplete.tflite file or the script you used to create it?
Thank you for your reply. The autocomplete.tflite was converted with:

```python
gpt2_lm.jit_compile = False
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [concrete_func],
    gpt2_lm)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # enable TF ops
]
converter.allow_custom_ops = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.experimental_select_user_tf_ops = [
    "UnsortedSegmentJoin",
    "UpperBound"
]
converter._experimental_guarantee_all_funcs_one_use = True
quant_generate_tflite = converter.convert()
run_inference("I'm enjoying a", quant_generate_tflite)

with open('quantized_gpt2.tflite', 'wb') as f:
    f.write(quant_generate_tflite)  # this is autocomplete.tflite
```
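As an aside (my own suggestion, not something from this thread): a quick sanity check that the exported bytes are a plausible TensorFlow Lite model is to look for the FlatBuffer file identifier `TFL3`, which TFLite models carry at byte offset 4:

```python
def looks_like_tflite(model_bytes: bytes) -> bool:
    # TensorFlow Lite models are FlatBuffers; the 4-byte file
    # identifier "TFL3" sits at bytes 4..8 of the buffer.
    return len(model_bytes) >= 8 and model_bytes[4:8] == b"TFL3"

# e.g. check looks_like_tflite(quant_generate_tflite) before writing the file.
# The byte string below is a synthetic 8-byte header, not a real model:
print(looks_like_tflite(b"\x1c\x00\x00\x00TFL3"))  # prints True
```

This does not validate the graph, but it catches truncated or corrupted files before they ever reach the interpreter.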
Hi @walker-ai, I followed the same example while creating the autocomplete.tflite file; for some reason the "generate" function under the "tf.function" decorator gave me a NumPy-related error. I am trying to debug that, but if you have a dummy autocomplete.tflite file that you can provide, it will save me some time.
Sorry, I haven't provided more details. You can open the Colab and run through the notebook to get the model file.
Hi @walker-ai, thank you for the model file. I replicated your issue on my emulator with Android 10; it also crashed when I tried running the model on GPU. I will try out some other TensorFlow examples which make use of GPU acceleration, and I will get back to you.
Issue type
Bug
Have you reproduced the bug with TensorFlow Nightly?
Yes
Source
source
TensorFlow version
tf 2.12.0
Custom code
Yes
OS platform and distribution
Linux Ubuntu 20.04
Mobile device
Android 10.0 emulator
Python version
3.9
Bazel version
No response
GCC/compiler version
No response
CUDA/cuDNN version
CUDA 11.4 cuDNN 8
GPU model and memory
No response
Current behavior?
Sorry about choosing the issue type 'bug'; I have no idea how to solve this, and I don't mean any offense by filing it as one.
I am using the official TensorFlow example named generative_ai, from:
https://github.com/tensorflow/examples/tree/master/lite/examples/generative_ai/android.
I tried to add a GPU delegate to the interpreter, and my dependencies' versions are:
The test case is:
I have only added these lines (the section commented with "Added wyt"), but the app cannot be installed correctly; it crashes. When I delete the options parameter, i.e. interpreter = Interpreter(it), it works well. It seems the GPU delegate is not supported here.
Standalone code to reproduce the issue
Relevant log output