
how to download or install .so file for tflite conversion with gpu delegate #61743

Open
Alwaysadil opened this issue Aug 29, 2023 · 55 comments
Labels: stat:awaiting tensorflower, TF 2.13, TFLiteConverter, TFLiteGpuDelegate

@Alwaysadil

1. System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
  • TensorFlow installation (pip package or built from source):
  • TensorFlow library (version, if pip package or github SHA, if built from source):

2. Code

Provide code to help us reproduce your issues using one of the following options:

Option A: Reference colab notebooks

  1. Reference TensorFlow Model Colab: Demonstrate how to build your TF model.
  2. Reference TensorFlow Lite Model Colab: Demonstrate how to convert your TF model to a TF Lite model (with quantization, if used) and run TFLite Inference (if possible).
(You can paste links or attach files by dragging & dropping them below)
- Provide links to your updated versions of the above two colab notebooks.
- Provide links to your TensorFlow model and (optionally) TensorFlow Lite Model.

Option B: Paste your code here or provide a link to a custom end-to-end colab

(You can paste links or attach files by dragging & dropping them below)
- Include code to invoke the TFLite Converter Python API and the errors.
- Provide links to your TensorFlow model and (optionally) TensorFlow Lite Model.

3. Failure after conversion

If the conversion is successful, but the generated model is wrong, then state what is wrong:

  • Model produces wrong results and/or has lesser accuracy.
  • Model produces correct results, but it is slower than expected.

4. (optional) RNN conversion support

If converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title.

5. (optional) Any other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

@Alwaysadil added the TFLiteConverter label Aug 29, 2023
@Alwaysadil
Author

import tensorflow as tf

concrete_func = model_beam_search.call.get_concrete_function()

# Create a TFLite converter and set the delegate to TfLiteGpuDelegate
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func], model_beam_search)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

# Replace TfLiteFlexDelegate with TfLiteGpuDelegate
gpu_delegate = tf.lite.experimental.load_delegate('libtensorflowlite_gpu_delegate.so')
converter.experimental_new_converter = True  # This flag is needed for using the experimental converter
converter.experimental_new_quantizer = False  # You can enable quantization if needed
converter.experimental_enable_resource_variable = False  # You can enable resource variables if needed
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, gpu_delegate]
tflite_model = converter.convert()

# Save the TFLite model to a file
with open('testing_gpu.tflite', 'wb') as f:
    f.write(tflite_model)
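(Note: in the TFLite Python API, delegates are not part of conversion; they are applied at inference time when constructing the Interpreter. A minimal sketch of the usual split, reusing the names from the snippet above:)

import tensorflow as tf

# Conversion: no delegate is involved here
concrete_func = model_beam_search.call.get_concrete_function()
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func], model_beam_search)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
with open('testing_gpu.tflite', 'wb') as f:
    f.write(tflite_model)

# Inference: the delegate is applied when building the Interpreter
gpu_delegate = tf.lite.experimental.load_delegate('libtensorflowlite_gpu_delegate.so')
interpreter = tf.lite.Interpreter(model_path='testing_gpu.tflite',
                                  experimental_delegates=[gpu_delegate])
interpreter.allocate_tensors()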

@sushreebarsa added the TFLiteGpuDelegate label Aug 30, 2023
@sushreebarsa
Contributor

@Alwaysadil Could you please let us know which TF version you are using here and refer to this GPU delegate guide for more information on this. Thank you!

@sushreebarsa added the stat:awaiting response label Aug 30, 2023
@Alwaysadil
Author

Alwaysadil commented Aug 30, 2023

@sushreebarsa I am using 2.13.0

@google-ml-butler bot removed the stat:awaiting response label Aug 30, 2023
@sushreebarsa added the TF 2.13 label Aug 30, 2023
@sushreebarsa
Contributor

@Alwaysadil Thank you for your quick response!
Could you please let us know if the GPU delegate guide helped you in any way. Thank you!

@sushreebarsa added the stat:awaiting response label Aug 30, 2023
@google-ml-butler bot removed the stat:awaiting response label Aug 30, 2023
@Alwaysadil
Author

Alwaysadil commented Aug 30, 2023

@sushreebarsa Yes, the GPU delegate documentation you shared was helpful, thank you for sharing it. But I didn't understand how to get the .so file libtensorflowlite_gpu_delegate.so.

@pjpratik
Contributor

Hi @Alwaysadil

To get libtensorflowlite_gpu_delegate.so, you have to build the .so file using the following steps:

  1. Install bazel 5.3.0
  2. git clone https://github.com/tensorflow/tensorflow.git
  3. cd tensorflow
  4. git checkout r2.13
  5. ./configure (answer yes for the Android build and provide the NDK and SDK versions)
  6. Run bazel build --config android_arm64 tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_delegate.so
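A rough Colab-style sketch of the non-interactive parts of these steps (assuming bazel 5.3.0 and the Android NDK/SDK are already installed; ./configure itself must still be run interactively):

import subprocess

# Fetch the TF 2.13 sources (steps 2-4 above)
subprocess.run("git clone https://github.com/tensorflow/tensorflow.git", shell=True, check=True)
subprocess.run("git checkout r2.13", shell=True, check=True, cwd="tensorflow")

# Step 5 (./configure) is interactive: answer yes for the Android build
# and provide the NDK/SDK paths. Afterwards, run the build (step 6):
subprocess.run(
    "bazel build --config android_arm64 "
    "tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_delegate.so",
    shell=True, check=True, cwd="tensorflow",
)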

Please refer to this documentation for reference.

Thanks.

@pjpratik added the stat:awaiting response label Aug 30, 2023
@Alwaysadil
Author

Alwaysadil commented Aug 30, 2023

@pjpratik Could you please provide a Google Colab notebook with the code to get the .so file? I'm getting errors in Google Colab (please check my colab):

https://colab.research.google.com/drive/1aaX-Dm_TySAWWWyR1S6UEiPc5EB9kPjQ#scrollTo=nhrzFEC7GDXr

Errors like this:

WARNING: Target pattern parsing failed.
Loading: 0 packages loaded
currently loading: tensorflow/lite/delegates/gpu
ERROR: no such package '@local_config_tensorrt//': Repository command failed
Could not find any NvInferVersion.h matching version '' in any subdirectory:
''
'include'
'include/cuda'
'include/*-linux-gnu'
'extras/CUPTI/include'
'include/cuda/CUPTI'
'local/cuda/extras/CUPTI/include'
of:
'/lib'
'/lib/x86_64-linux-gnu'
'/lib32'
'/usr'
'/usr/local/cuda'
'/usr/local/cuda/targets/x86_64-linux/lib'
'/usr/local/lib'
Analyzing: 0 targets (0 packages loaded)
currently loading: tensorflow/lite/delegates/gpu
INFO: Elapsed time: 83.761s
Analyzing: 0 targets (0 packages loaded)
currently loading: tensorflow/lite/delegates/gpu
INFO: 0 processes.
Analyzing: 0 targets (0 packages loaded)
currently loading: tensorflow/lite/delegates/gpu
FAILED: Build did NOT complete successfully (0 packages loaded)
currently loading: tensorflow/lite/delegates/gpu
FAILED: Build did NOT complete successfully (0 packages loaded)
currently loading: tensorflow/lite/delegates/gpu
Fetching @local_config_rocm; fetching

Please help me.

@google-ml-butler bot removed the stat:awaiting response label Aug 30, 2023
@pjpratik
Contributor

Hi @Alwaysadil

The colab shared is currently not accessible. Could you please provide the steps you have followed?

You can follow these instructions, along with the download links, to set up the configuration on your local machine. Please let us know if you are facing any issue after following the steps.

Thanks.

@pjpratik added the stat:awaiting response label Aug 31, 2023
@Alwaysadil
Author

Alwaysadil commented Aug 31, 2023

Hi @pjpratik, thank you for your response. Could you please check my colab notebook? It is now accessible. I'm unable to download the .so file; please kindly go through this colab link: https://colab.research.google.com/drive/1aaX-Dm_TySAWWWyR1S6UEiPc5EB9kPjQ#scrollTo=nhrzFEC7GDXr

I want to load it with TF:

import tensorflow as tf
delegate = tf.lite.experimental.load_delegate('libtensorflowlite_gpu_delegate.so')  # with this we can get faster predictions of the tflite model

Please help me.

@google-ml-butler bot removed the stat:awaiting response label Aug 31, 2023
@pjpratik
Contributor

Hi @Alwaysadil

Thanks for sharing the code. I can see that the Android NDK and SDK tools have not been configured.

The Android NDK is required to build the native (C/C++) TensorFlow Lite code. The current recommended version is 21e, which may be found here.
The Android SDK and build tools may be obtained here.

Run the ./configure script in the root TensorFlow checkout directory, and answer "Yes" when the script asks to interactively configure the ./WORKSPACE for Android builds.

Also, you can try this prebuilt .so file and see if it works for your case.
libtensorflowlite_gpu_delegate.so.zip
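One quick way to check which architecture a given .so was built for before trying to load it (a diagnostic sketch, assuming a Linux host with the file utility available):

import subprocess

# Prints e.g. "ELF 64-bit LSB shared object, ARM aarch64, ..." -
# the delegate can only be loaded on a machine of matching architecture.
result = subprocess.run(
    ["file", "libtensorflowlite_gpu_delegate.so"],
    capture_output=True, text=True,
)
print(result.stdout)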

Thanks.

@pjpratik added the stat:awaiting response label Aug 31, 2023
@Alwaysadil
Author

Alwaysadil commented Sep 1, 2023

Hi @pjpratik
Thanks for your response, it means a lot.

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: y
Searching for NDK and SDK installations.

Please specify the home path of the Android NDK to use. [Default is /root/Android/Sdk/ndk-bundle]:

The path /root/Android/Sdk/ndk-bundle or its child file "source.properties" does not exist.
Please specify the home path of the Android NDK to use. [Default is /root/Android/Sdk/ndk-bundle]: /content/android-ndk-r21e

Please specify the (min) Android NDK API level to use. [Available levels: ['16', '17', '18', '19', '21', '22', '23', '24', '26', '27', '28', '29', '30']] [Default is 26]:

Please specify the home path of the Android SDK to use. [Default is /root/Android/Sdk]: /root/android-sdk

Please specify the Android SDK API level to use. [Available levels: ['30']] [Default is 30]:

Please specify an Android build tools version to use. [Available versions: ['30.0.3']] [Default is 30.0.3]:

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=mkl_aarch64 # Build with oneDNN and Compute Library for the Arm Architecture (ACL).
--config=monolithic # Config for mostly static monolithic build.
--config=numa # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
--config=v1 # Build with TensorFlow 1 API instead of TF 2 API.
Preconfigured Bazel build configs to DISABLE default on features:
--config=nogcp # Disable GCP support.
--config=nonccl # Disable NVIDIA NCCL support.
Configuration finished
https://colab.research.google.com/drive/1aaX-Dm_TySAWWWyR1S6UEiPc5EB9kPjQ#scrollTo=nhrzFEC7GDXr
But the .so file was not visible, so I downloaded it by giving the path like this (see the above colab link for the way I did it):
# checking the path
import os

path_to_check = '/content/tensorflow/bazel-bin/tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so'

if os.path.exists(path_to_check):
    print("yes")
else:
    print("no")

It was printing "yes".

# downloading the .so file
from google.colab import files

source_path = '/content/tensorflow/bazel-bin/tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so'

if os.path.exists(source_path):
    files.download(source_path)
else:
    print("Source file does not exist.")

I have successfully downloaded libtensorflowlite_gpu_delegate.so; see this link to get the newly built .so file: https://drive.google.com/file/d/1848HQ4ExO72zkTdQC-yr7rrc7kvVfeeE/view?usp=sharing

While loading this .so in a new colab notebook, I am getting the below error for both the newly built .so and the prebuilt one you shared; check this: https://colab.research.google.com/drive/1jAaFDTwqRWuISD0nA6OF9d0eSq39t6w1?usp=sharing

import tensorflow as tf
delegate = tf.lite.experimental.load_delegate('/content/drive/MyDrive/delegate/libtensorflowlite_gpu_delegate.so')  # with this we can get faster predictions

OSError                                   Traceback (most recent call last)
in <cell line: 2>()
      1 import tensorflow as tf
----> 2 delegate = tf.lite.experimental.load_delegate('/content/drive/MyDrive/delegate/libtensorflowlite_gpu_delegate.so')  # with this we can get faster predictions

3 frames
/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/interpreter.py in load_delegate(library, options)
    164   """
    165   try:
--> 166     delegate = Delegate(library, options)
    167   except ValueError as e:
    168     raise ValueError('Failed to load delegate from {}\n{}'.format(

/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/interpreter.py in __init__(self, library, options)
     71           'due to missing immediate reference counting.')
     72
---> 73     self._library = ctypes.pydll.LoadLibrary(library)
     74     self._library.tflite_plugin_create_delegate.argtypes = [
     75         ctypes.POINTER(ctypes.c_char_p),

/usr/lib/python3.10/ctypes/__init__.py in LoadLibrary(self, name)
    450
    451     def LoadLibrary(self, name):
--> 452         return self._dlltype(name)
    453
    454     __class_getitem__ = classmethod(_types.GenericAlias)

/usr/lib/python3.10/ctypes/__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error, winmode)
    372
    373         if handle is None:
--> 374             self._handle = _dlopen(self._name, mode)
    375         else:
    376             self._handle = handle

OSError: libtensorflowlite_gpu_delegate.so: cannot open shared object file: No such file or directory

But if I check the path:

import os

path_to_check = '/content/drive/MyDrive/delegate/libtensorflowlite_gpu_delegate.so'

if os.path.exists(path_to_check):
    print("yes")
else:
    print("no")

# it was printing "yes"

Please help me to overcome this issue.
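(A note on diagnosing this: dlopen can report "cannot open shared object file: No such file or directory" even when the file itself exists, because the message may refer to a missing dependency of the library, or the library may target a different architecture. Calling dlopen directly, which is roughly what load_delegate does internally, can make this easier to see; a minimal sketch:)

import ctypes

path = '/content/drive/MyDrive/delegate/libtensorflowlite_gpu_delegate.so'
try:
    ctypes.CDLL(path)  # roughly the same dlopen call load_delegate performs
except OSError as e:
    # The file named in this error may be a missing dependency, not `path`.
    print(e)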

@google-ml-butler bot removed the stat:awaiting response label Sep 1, 2023
@pjpratik
Contributor

pjpratik commented Sep 1, 2023

Hi @Alwaysadil

Apologies for the confusion. The delegate can be loaded only if it matches the target architecture. The colab runtime ships with x86_64 (test it with !uname -a), hence we need to build with bazel build --config android_x86_64 tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_delegate.so and then try to load the delegate.
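For reference, the host architecture can also be checked from Python (the same information as !uname -a):

import platform

# On Colab this typically prints 'x86_64'; an arm64-built delegate
# will not load on this host.
print(platform.machine())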

Thanks.

@pjpratik added the stat:awaiting response label Sep 1, 2023
@Alwaysadil
Author

Hi @pjpratik, I didn't get what you said. I tried in my local system terminal too, and the same error is still occurring. Could you please help me to overcome this issue?

@pkgoogle

Hi @Alwaysadil, help me understand your current state. You are able to use GPU/NNAPI delegates but they aren't improving the performance? If so, can you show/explain the magnitude of performance difference when using those things?

@pkgoogle added the stat:awaiting response label Sep 27, 2023
@Alwaysadil
Author

Alwaysadil commented Sep 28, 2023

Hi @pkgoogle, thanks for your response.
I'm unable to apply GPU/NNAPI delegates; that's the issue for me.

These are my imports:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

@google-ml-butler bot removed the stat:awaiting response label Sep 28, 2023
@Alwaysadil
Author

@pkgoogle

Hi @Alwaysadil, I don't have permissions :), I think you can just drag and drop the file(s) into github as well.

@pkgoogle added the stat:awaiting response label Sep 28, 2023
@Alwaysadil
Author

Hi @pkgoogle, thanks for your response.
Please check the link below:
https://drive.google.com/file/d/1jeKKxmhWoFxKHo3hk8yQF-YGWQ4BdJ5A/view?usp=drive_link

@google-ml-butler bot removed the stat:awaiting response label Sep 29, 2023
@pkgoogle

Hi @Alwaysadil, I was able to run your project on a Pixel 6 Pro API 34 emulator, it seemed to work... can you direct me to how I may see the issue? Thanks for the info/help!

@pkgoogle added the stat:awaiting response label Sep 29, 2023
@Alwaysadil
Author

Alwaysadil commented Sep 30, 2023

Hi @pkgoogle, thanks for your response.
Please try to run with a Pixel 3, API 30.
I want the output predictions to be faster (10-20 ms); that is achieved with delegates.
While applying the delegates I'm getting the errors I mentioned earlier.
Please help me to achieve this.
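(For reference, a minimal sketch for measuring single-inference latency of a .tflite model in Python — the model path here is hypothetical; this runs on CPU by default, and a successfully loaded delegate would be passed via experimental_delegates:)

import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="testing_gpu.tflite")
interpreter.allocate_tensors()

# Feed a zero tensor of the right shape/dtype as a dummy input
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))

start = time.perf_counter()
interpreter.invoke()
print(f"invoke took {(time.perf_counter() - start) * 1e3:.1f} ms")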

@google-ml-butler bot removed the stat:awaiting response label Sep 30, 2023
@pkgoogle

pkgoogle commented Oct 2, 2023

Hi @Alwaysadil I'm not getting the errors... how may I get the errors?

[screenshot]

@pkgoogle added the stat:awaiting response label Oct 2, 2023
@Alwaysadil
Author

Alwaysadil commented Oct 3, 2023

Hi @pkgoogle,
Thanks for your response.
You are not getting any errors while applying GPU/NNAPI delegates?
Could you please check with a physical device, whether you are able to apply the delegates.
Thank you

@google-ml-butler bot removed the stat:awaiting response label Oct 3, 2023
@pkgoogle

pkgoogle commented Oct 3, 2023

Hi @Alwaysadil, can you tell me where in the project you shared you are applying GPU/NNAPI delegates? I just want to ensure I'm actually replicating your environment. I don't see it but I do not know your project very well.

@pkgoogle added the stat:awaiting response label Oct 3, 2023
@Alwaysadil
Author

Alwaysadil commented Oct 4, 2023

Hi @pkgoogle
Thanks for your response
For GPU delegates, please check the class "QaClient.java", line no. 145.

@google-ml-butler bot removed the stat:awaiting response label Oct 4, 2023
@pkgoogle

pkgoogle commented Oct 4, 2023

It seems my setup does not use the GPU (the info log from my custom code states it's not being used):

[screenshot]

@arfaian I don't have a physical device to test this, can you please take a look? Thanks.

@pkgoogle added the stat:awaiting tensorflower label Oct 4, 2023
@wqy123456

Hi @pkgoogle
I'm trying to build libtensorflowlite_gpu_delegate.so on Ubuntu 20.04, but I failed using this command: bazel build -c opt tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_delegate.so --copt -DEGL_NO_X11=1

Any other info / logs

ERROR: /home/sstc/tensorflow/tensorflow/lite/delegates/gpu/BUILD:134:10: Linking tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so failed: (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc @bazel-out/k8-opt/bin/tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so-2.params
/usr/bin/ld: cannot find -lnativewindow
/usr/bin/ld: cannot find -lnativewindow
collect2: error: ld returned 1 exit status
Target //tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_delegate.so failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 123.625s, Critical Path: 46.21s
INFO: 804 processes: 190 internal, 614 local.
FAILED: Build did NOT complete successfully

Can you help me look at this problem?

@wqy123456

Hi @pkgoogle

It is the final link step that raises an error saying it can't find -lnativewindow.

@wqy123456

wqy123456 commented Oct 18, 2023

Hi @pjpratik

I'm trying to build libtensorflowlite_gpu_delegate.so on Ubuntu 20.04, but I failed using this command: bazel build -c opt tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_delegate.so --copt -DEGL_NO_X11=1

Any other info / logs

ERROR: /home/sstc/tensorflow/tensorflow/lite/delegates/gpu/BUILD:134:10: Linking tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so failed: (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc @bazel-out/k8-opt/bin/tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so-2.params
/usr/bin/ld: cannot find -lnativewindow
/usr/bin/ld: cannot find -lnativewindow
collect2: error: ld returned 1 exit status
Target //tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_delegate.so failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 123.625s, Critical Path: 46.21s
INFO: 804 processes: 190 internal, 614 local.
FAILED: Build did NOT complete successfully

It is the final link step that raises an error saying it can't find -lnativewindow.

Can you help me look at this problem?

Thanks!

@Alwaysadil
Author

Hi @wqy123456
I'm getting errors too; I didn't overcome that issue.

@wqy123456

Hi @Alwaysadil, thank you for your response!
