how to download or install .so file for tflite conversion with gpu delegate #61743
Comments
```python
concrete_func = model_beam_search.call.get_concrete_function()

# Create a TFLite converter and set the delegate to TfLiteGpuDelegate
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func], model_beam_search)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

# Replace TfLiteFlexDelegate with TfLiteGpuDelegate
gpu_delegate = tf.lite.experimental.load_delegate('libtensorflowlite_gpu_delegate.so')
converter.experimental_new_converter = True   # This flag is needed for using the experimental converter
converter.experimental_new_quantizer = False  # You can enable quantization if needed
converter.experimental_enable_resource_variable = False  # You can enable resource variables if needed
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, gpu_delegate]
tflite_model = converter.convert()

# Save the TFLite model to a file
with open('testing_gpu.tflite', 'wb') as f:
    f.write(tflite_model)
```
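A note on usage, as an aside (this is an assumption about intent, not something confirmed in the thread): a delegate object is normally passed to the `tf.lite.Interpreter` at inference time rather than added to `converter.target_spec.supported_ops`. A minimal, stdlib-only sketch of locating the delegate `.so` before creating an interpreter; the candidate paths are hypothetical:

```python
import os

# Hypothetical candidate locations for the GPU delegate library; adjust
# these to wherever your build actually places the .so file.
CANDIDATES = [
    "libtensorflowlite_gpu_delegate.so",
    "bazel-bin/tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so",
]

def pick_delegate_path(candidates=CANDIDATES):
    """Return the first delegate .so found on disk, or None if absent."""
    for path in candidates:
        if os.path.exists(path):
            return path
    return None

# Sketch of intended use (requires TensorFlow and a .so built for the
# architecture the interpreter runs on):
#   delegate_path = pick_delegate_path()
#   delegates = [tf.lite.experimental.load_delegate(delegate_path)] if delegate_path else []
#   interpreter = tf.lite.Interpreter(model_path="testing_gpu.tflite",
#                                     experimental_delegates=delegates)
```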
@Alwaysadil Could you please let us know which TF version you are using, and refer to this GPU delegate guide for more information. Thank you!
@sushreebarsa I am using 2.13.0
@Alwaysadil Thank you for your quick response!
@sushreebarsa Yes, it was helpful that you shared the GPU delegate documentation, thank you for that, but I didn't understand how to get the .so file libtensorflowlite_gpu_delegate.so
Hi @Alwaysadil To get `libtensorflowlite_gpu_delegate.so`, you need to build it from the TensorFlow source.
Please refer to this documentation for reference. Thanks.
@pjpratik Could you please provide me a Google Colab notebook with code to get the .so file? I'm getting errors in Google Colab (please check my Colab: https://colab.research.google.com/drive/1aaX-Dm_TySAWWWyR1S6UEiPc5EB9kPjQ#scrollTo=nhrzFEC7GDXr) like this: WARNING: Target pattern parsing failed. Please help me.
Hi @Alwaysadil The Colab shared is currently not accessible. Could you please provide the steps you have followed? You can follow these instructions along with the download links for setting up the configuration on your local machine. Please let us know if you are facing any issue after following the steps. Thanks.
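One failure mode that shows up in this thread is running `./configure` with an Android NDK home that lacks a `source.properties` file. A small stdlib-only sketch for sanity-checking a candidate NDK path up front; the layout check is an assumption mirroring the configure prompt's complaint, not taken from the actual script:

```python
import os

def looks_like_ndk(ndk_home):
    """Heuristic check mirroring the ./configure complaint: the NDK home
    directory must contain a source.properties file."""
    return os.path.isfile(os.path.join(ndk_home, "source.properties"))
```

Running this against each candidate path before starting `./configure` saves a round trip through the interactive prompts.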
Hi @pjpratik Thank you for your response. Could you please check my Colab notebook? It is now accessible. I'm unable to download the .so file; please kindly go through this Colab link: https://colab.research.google.com/drive/1aaX-Dm_TySAWWWyR1S6UEiPc5EB9kPjQ#scrollTo=nhrzFEC7GDXr I want to load this with TF. Please help me.
Hi @Alwaysadil Thanks for sharing the code. I can see that the Android NDK and SDK tools have not been configured. The Android NDK is required to build the native (C/C++) TensorFlow Lite code. The current recommended version is 21e, which may be found here. Run the ./configure script in the root TensorFlow checkout directory, and answer "Yes" when the script asks to interactively configure the ./WORKSPACE for Android builds. Also, you can use this prebuilt one. Thanks.
Hi @pjpratik Here is my ./configure session:

```
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: y
Please specify the home path of the Android NDK to use. [Default is /root/Android/Sdk/ndk-bundle]:
The path /root/Android/Sdk/ndk-bundle or its child file "source.properties" does not exist.
Please specify the (min) Android NDK API level to use. [Available levels: ['16', '17', '18', '19', '21', '22', '23', '24', '26', '27', '28', '29', '30']] [Default is 26]:
Please specify the home path of the Android SDK to use. [Default is /root/Android/Sdk]: /root/android-sdk
Please specify the Android SDK API level to use. [Available levels: ['30']] [Default is 30]:
Please specify an Android build tools version to use. [Available versions: ['30.0.3']] [Default is 30.0.3]:
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
```

And my check for the built artifact:

```python
path_to_check = '/content/tensorflow/bazel-bin/tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so'
if os.path.exists(path_to_check):
    # downloading the .so file
    source_path = '/content/tensorflow/bazel-bin/tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so'
    if os.path.exists(source_path):
```

I have successfully downloaded libtensorflowlite_gpu_delegate.so; see this link to get the newly built .so file: https://drive.google.com/file/d/1848HQ4ExO72zkTdQC-yr7rrc7kvVfeeE/view?usp=sharing While loading this .so in a new Colab notebook (starting from `import tensorflow as tf`), I am getting the error below for both the newly built .so and the prebuilt one you shared. Check this: https://colab.research.google.com/drive/1jAaFDTwqRWuISD0nA6OF9d0eSq39t6w1?usp=sharing
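The existence check above can be rounded out into a small copy step for pulling the built artifact out of `bazel-bin` into a downloadable location. A stdlib-only sketch; the paths involved are whatever your build actually produced:

```python
import os
import shutil

def export_artifact(src, dst_dir):
    """Copy a built artifact (e.g. the delegate .so under bazel-bin) into a
    directory you can download from. Returns the new path, or None if the
    build output is missing."""
    if not os.path.exists(src):
        return None
    os.makedirs(dst_dir, exist_ok=True)
    dst = os.path.join(dst_dir, os.path.basename(src))
    shutil.copy2(src, dst)
    return dst
```

Returning `None` rather than raising keeps the notebook cell quiet when the build step failed and produced nothing.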
Hi @Alwaysadil Apologies for the confusion. The delegate can be loaded only if it matches the target architecture; the Colab runtime is a different architecture from the one the delegate was built for. Thanks.
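The architecture mismatch can be checked explicitly before calling `load_delegate`, which turns an opaque loader error into a readable message. A sketch using only the standard library; the function name and alias table are illustrative, not from any TensorFlow API:

```python
import platform

def delegate_matches_host(delegate_arch):
    """Compare the architecture a delegate .so was built for against the
    machine the interpreter runs on. An 'arm64' Android build will not
    load on an x86_64 Colab VM, for example."""
    host = platform.machine().lower()  # e.g. 'x86_64', 'aarch64'
    aliases = {"arm64": "aarch64", "amd64": "x86_64"}
    return aliases.get(delegate_arch, delegate_arch) == aliases.get(host, host)
```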
Hi @pjpratik I didn't understand what you said. I tried in my local system terminal too and the same error occurs. Could you please help me overcome this issue?
Hi @Alwaysadil, help me understand your current state. You are able to use GPU/NNAPI delegates, but they aren't improving the performance? If so, can you show/explain the magnitude of the performance difference when using them?
Hi @pkgoogle Thanks for your response. These are my imports:

```java
import java.io.FileInputStream;
```
Hi @pkgoogle Please check this Android project.
Hi @Alwaysadil, I don't have permissions :). I think you can just drag and drop the file(s) into GitHub as well.
Hi @pkgoogle, thanks for your response.
Hi @Alwaysadil, I was able to run your project on a Pixel 6 Pro API 34 emulator and it seemed to work. Can you direct me to how I may see the issue? Thanks for the info/help!
Hi @pkgoogle Thanks for your response. I want the output predictions to be faster (10-20 ms).
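To put numbers on "faster", a framework-agnostic latency-measurement sketch in plain Python: wrap whatever runs a single inference in a callable and time it. Warmup iterations are discarded so one-time costs (delegate setup, tensor allocation) do not skew the average:

```python
import time

def measure_latency_ms(run_once, warmup=3, iters=20):
    """Average wall-clock latency of a single-inference callable, in ms.
    Warmup runs are executed but not timed."""
    for _ in range(warmup):
        run_once()
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    return (time.perf_counter() - start) * 1000.0 / iters
```

Comparing this number with and without the delegate attached is the cleanest way to show whether the delegate actually helps.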
Hi @Alwaysadil I'm not seeing the errors. How can I reproduce them?
Hi @pkgoogle |
Hi @Alwaysadil, can you tell me where in the project you shared you are applying the GPU/NNAPI delegates? I just want to ensure I'm actually replicating your environment. I don't see it, but I do not know your project very well.
Hi @pkgoogle |
It seems my setup does not use the GPU (the info log from my custom code states it's not being used). @arfaian I don't have a physical device to test this; can you please take a look? Thanks.
Hi @pkgoogle Any other info / logs:

```
ERROR: /home/sstc/tensorflow/tensorflow/lite/delegates/gpu/BUILD:134:10: Linking tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so failed: (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc @bazel-out/k8-opt/bin/tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so-2.params
```

Can you help me look at this problem?
Hi @pkgoogle It is the final linkage step that raises an error stating it can't find -lnativewindow.
Hi @pjpratik I'm trying to build libtensorflowlite_gpu_delegate.so on Ubuntu 20.04, but I failed using this command:

```
bazel build -c opt tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_delegate.so --copt -DEGL_NO_X11=1
```

Any other info / logs:

```
ERROR: /home/sstc/tensorflow/tensorflow/lite/delegates/gpu/BUILD:134:10: Linking tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so failed: (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc @bazel-out/k8-opt/bin/tensorflow/lite/delegates/gpu/libtensorflowlite_gpu_delegate.so-2.params
```

It is the final linkage step that raises an error stating it can't find -lnativewindow. Can you help me look at this problem? Thanks!
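When a long Bazel log ends in a linker failure like this, it helps to pull out exactly which libraries the linker could not resolve. A tiny sketch that scans a log for ld-style "cannot find -lfoo" lines; the regex covers both the "cannot find" and "can't find" phrasings seen in this thread, and is an assumption about the log format rather than a guaranteed parser:

```python
import re

def missing_libs(log_text):
    """Extract library names from ld-style "cannot find -lfoo" lines,
    deduplicated and sorted."""
    return sorted(set(re.findall(r"can(?:no|')t find -l([\w.]+)", log_text)))
```

A missing `-lnativewindow` typically means Android-only system libraries are being linked into a desktop build, which points at the build configuration rather than the source.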
Hi @wqy123456 |
Hi @Alwaysadil, thank you for your response!