
Linking an Android library with TFLite GPU using CMake causes undefined symbol errors #61312

Open
GoldFeniks opened this issue Jul 18, 2023 · 15 comments
Assignees
Labels
awaiting review Pull request awaiting review comp:lite TF Lite related issues stat:awaiting tensorflower Status - Awaiting response from tensorflower TF 2.13 For issues related to Tensorflow 2.13 type:build/install Build and install issues

Comments

GoldFeniks commented Jul 18, 2023

Issue type

Build/Install

Have you reproduced the bug with TensorFlow Nightly?

No

Source

source

TensorFlow version

2.13

Custom code

Yes

OS platform and distribution

Linux 6.3.1, EndeavourOS

Mobile device

No response

Python version

No response

Bazel version

No response

GCC/compiler version

clang version 14.0.7

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

Linking an Android library against libtensorflow-lite.a using CMake with the GPU delegate enabled causes undefined symbol errors.

Standalone code to reproduce the issue

Please find a minimal test case here.

Relevant log output

ld: error: undefined symbol: tflite::delegates::BackendAsyncKernelInterface::BackendAsyncKernelInterface()
>>> referenced by delegate.cc:705 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:705)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::CreateAsyncRegistration()::$_3::__invoke(TfLiteContext*, char const*, unsigned long)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> did you mean: tflite::delegates::BackendAsyncKernelInterface::~BackendAsyncKernelInterface()
>>> defined in: tensorflow/tensorflow/lite/libtensorflow-lite.a(delegate.cc.o)

ld: error: undefined symbol: kTfLiteSyncTypeNoSyncObj
>>> referenced by string.h:61 (/opt/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/bits/fortify/string.h:61)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::CreateAsyncRegistration()::$_3::__invoke(TfLiteContext*, char const*, unsigned long)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by string.h:61 (/opt/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/bits/fortify/string.h:61)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::CreateAsyncRegistration()::$_3::__invoke(TfLiteContext*, char const*, unsigned long)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: TfLiteAttributeMapIsBufferAttributeMap
>>> referenced by delegate.cc:1058 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1058)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:908 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:908)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:909 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:909)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced 1 more times

ld: error: undefined symbol: tflite::delegates::utils::ReadBufferAttrs(TfLiteAttributeMap const*)
>>> referenced by delegate.cc:1061 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1061)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:925 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:925)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: TfLiteBackendBufferGetPtr
>>> referenced by delegate.cc:1087 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1087)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: AHardwareBuffer_acquire
>>> referenced by delegate.cc:787 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:787)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: AHardwareBuffer_describe
>>> referenced by delegate.cc:803 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:803)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:803 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:803)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::$_10::operator()(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs*, std::__ndk1::vector<long, std::__ndk1::allocator<long> > const&, absl::lts_20230125::Status (tflite::gpu::InferenceRunner::*)(int, std::__ndk1::variant<std::__ndk1::monostate, tflite::gpu::OpenGlBuffer, tflite::gpu::OpenGlTexture, tflite::gpu::CpuMemory, tflite::gpu::OpenClBuffer, tflite::gpu::OpenClTexture, tflite::gpu::VulkanBuffer, tflite::gpu::VulkanTexture>)) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: AHardwareBuffer_release
>>> referenced by delegate.cc:795 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:795)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:795 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:795)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:795 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:795)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Acquire(AHardwareBuffer*)::'lambda'(AHardwareBuffer*)::__invoke(AHardwareBuffer*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: tflite::delegates::utils::WriteBufferAttrs(tflite::delegates::utils::BufferAttributes const&, TfLiteAttributeMap*)
>>> referenced by delegate.cc:927 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:927)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:927 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:927)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:927 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:927)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced 2 more times

ld: error: undefined symbol: TfLiteAttributeMapIsSyncAttributeMap
>>> referenced by delegate.cc:933 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:933)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:934 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:934)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:941 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:941)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced 1 more times

ld: error: undefined symbol: tflite::delegates::utils::ReadSyncAttrs(TfLiteAttributeMap const*)
>>> referenced by delegate.cc:950 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:950)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:983 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:983)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::SetAttributes(TfLiteOpaqueContext*, TfLiteOpaqueNode*, int, TfLiteAttributeMap const*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: tflite::delegates::utils::WriteSyncAttrs(tflite::delegates::utils::SyncAttributes const&, TfLiteAttributeMap*)
>>> referenced by delegate.cc:952 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:952)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:954 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:954)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: TfLiteSynchronizationGetPtr
>>> referenced by delegate.cc:1256 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1256)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: tflite::delegates::utils::WaitForAllFds(absl::lts_20230125::Span<int const>)
>>> referenced by delegate.cc:1268 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1268)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: tflite::delegates::utils::ConvertToTfLiteStatus(absl::lts_20230125::Status)
>>> referenced by delegate.cc:1308 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1308)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:1289 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1289)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::$_10::operator()(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs*, std::__ndk1::vector<long, std::__ndk1::allocator<long> > const&, absl::lts_20230125::Status (tflite::gpu::InferenceRunner::*)(int, std::__ndk1::variant<std::__ndk1::monostate, tflite::gpu::OpenGlBuffer, tflite::gpu::OpenGlTexture, tflite::gpu::CpuMemory, tflite::gpu::OpenClBuffer, tflite::gpu::OpenClTexture, tflite::gpu::VulkanBuffer, tflite::gpu::VulkanTexture>)) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: AHardwareBuffer_unlock
>>> referenced by delegate.cc:1212 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1212)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:1212 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1212)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:1212 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1212)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs::~LockedAHWBs()) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: TfLiteSynchronizationSetPtr
>>> referenced by delegate.cc:1328 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1328)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: AHardwareBuffer_lock
>>> referenced by delegate.cc:1185 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1185)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::$_10::operator()(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs*, std::__ndk1::vector<long, std::__ndk1::allocator<long> > const&, absl::lts_20230125::Status (tflite::gpu::InferenceRunner::*)(int, std::__ndk1::variant<std::__ndk1::monostate, tflite::gpu::OpenGlBuffer, tflite::gpu::OpenGlTexture, tflite::gpu::CpuMemory, tflite::gpu::OpenClBuffer, tflite::gpu::OpenClTexture, tflite::gpu::VulkanBuffer, tflite::gpu::VulkanTexture>)) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
@google-ml-butler google-ml-butler bot added the type:build/install Build and install issues label Jul 18, 2023
@williamdias

Hey @GoldFeniks,

I had the same problem when building TFLite 2.13.0 with CMake. I managed to fix it by editing tensorflow/lite/CMakeLists.txt.

Before the if(TFLITE_ENABLE_GPU) block, I added the following lines:

populate_tflite_source_vars("core/async/interop" TFLITE_CORE_ASYNC_INTEROP_SRCS)
populate_tflite_source_vars("core/async/interop/c" TFLITE_CORE_ASYNC_INTEROP_C_SRCS)
populate_tflite_source_vars("delegates/utils" TFLITE_DELEGATES_UTILS_SRCS)
populate_tflite_source_vars("async" TFLITE_ASYNC_SRCS)

Then, inside the set(_ALL_TFLITE_SRCS ...) list, I added the following lines:

${TFLITE_CORE_ASYNC_INTEROP_SRCS}
${TFLITE_CORE_ASYNC_INTEROP_C_SRCS}
${TFLITE_DELEGATES_UTILS_SRCS}
${TFLITE_ASYNC_SRCS}
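Put together, the edited region of tensorflow/lite/CMakeLists.txt might look roughly like this. This is a sketch rather than a patch: the surrounding lines are abbreviated and the exact context varies by TensorFlow version.

```cmake
# Sketch only: "# ..." marks existing lines omitted here.
populate_tflite_source_vars("core/async/interop" TFLITE_CORE_ASYNC_INTEROP_SRCS)
populate_tflite_source_vars("core/async/interop/c" TFLITE_CORE_ASYNC_INTEROP_C_SRCS)
populate_tflite_source_vars("delegates/utils" TFLITE_DELEGATES_UTILS_SRCS)
populate_tflite_source_vars("async" TFLITE_ASYNC_SRCS)

if(TFLITE_ENABLE_GPU)
  # ... existing GPU delegate configuration ...
endif()

set(_ALL_TFLITE_SRCS
  ${TFLITE_CORE_ASYNC_INTEROP_SRCS}
  ${TFLITE_CORE_ASYNC_INTEROP_C_SRCS}
  ${TFLITE_DELEGATES_UTILS_SRCS}
  ${TFLITE_ASYNC_SRCS}
  # ... existing source variables ...
)
```

The idea is simply that the async/interop sources referenced by the GPU delegate's async kernel get compiled into the archive alongside the rest of the runtime.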

@GoldFeniks

@williamdias Thank you! Interesting; it looks like some source files got lost from the CMake build somewhere.

@sushreebarsa sushreebarsa added comp:lite TF Lite related issues TF 2.13 For issues related to Tensorflow 2.13 labels Jul 22, 2023
@sushreebarsa

@GoldFeniks Could you please let us know if the issue has been resolved for you?
Thank you!

@sushreebarsa sushreebarsa added the stat:awaiting response Status - Awaiting response from author label Jul 24, 2023
@GoldFeniks

@sushreebarsa It resolved the TFLite-related undefined symbols, but the AHardwareBuffer_* symbols still cannot be found.

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Status - Awaiting response from author label Jul 25, 2023
@williamdias

@GoldFeniks, as for the AHardwareBuffer symbols, try adding the following flag to the cmake command:

-DANDROID_PLATFORM="26"

@AntonMalyshev

Adding -DANDROID_PLATFORM="26" (i.e. setting minSdkVersion to 26) makes the build incompatible with any Android version below 8.0 (API level 26), doesn't it?

According to the docs, the minimum supported version is still 19 or 21 for most of the modules: https://www.tensorflow.org/lite/android/development
Has that changed?

@GoldFeniks

@williamdias I had already specified that as follows (see the build.sh):

-DANDROID_PLATFORM=android-26

Changing it to just "26" has no effect.

@williamdias

Hey @AntonMalyshev, yes, it drops compatibility with versions below 8.0. I believe the AHardwareBuffer symbols were introduced in API 26. I tried building for APIs 21, 22, 23, 24, and 25 and failed on all of them; it only worked with API 26.

@GoldFeniks, here's my cmake config command:

cmake \
    -DCMAKE_BUILD_TYPE="release" \
    -DCMAKE_TOOLCHAIN_FILE="$ANDROID_NDK_HOME/build/cmake/android.toolchain.cmake" \
    -DANDROID_PLATFORM="26" \
    -DANDROID_ABI="arm64-v8a" \
    -DTFLITE_ENABLE_GPU=ON \
    -DXNNPACK_ENABLE_ARM_BF16=OFF \
     ../../tensorflow/lite

I am using NDK 21.4.7075529 instead of 25. Also, I had to disable XNNPACK_ENABLE_ARM_BF16 as advised here.

@GoldFeniks

@williamdias Thank you, TensorFlow does in fact build with those settings, but linking anything that uses the GPU delegate against the resulting binary produces the linker errors above.

@williamdias

@GoldFeniks, hmm, I was able to use the binary and run models on the GPU. What errors are you getting? The only downside is that I had to drop support for Android < 8.0 (API 26).

GoldFeniks commented Jul 25, 2023

@williamdias Interesting. Are you building a static or a dynamic library?
In the sample, I'm linking against a static TensorFlow binary through CMake as follows:

cmake_minimum_required(VERSION 3.26)

project(tflite_link_issue C CXX)

set(CMAKE_CXX_STANDARD 17)

add_subdirectory(tensorflow/tensorflow/lite)

add_library(gpu SHARED gpu.hpp gpu.cpp)
target_link_libraries(gpu tensorflow-lite)

And I only call these functions in gpu.cpp:

TfLiteGpuDelegateOptionsV2 options = TfLiteGpuDelegateOptionsV2Default();
const auto tf_delegate = tflite::Interpreter::TfLiteDelegatePtr(TfLiteGpuDelegateV2Create(&options), TfLiteGpuDelegateV2Delete);

which gives me a series of undefined symbol errors for AHardwareBuffer_* (see the log in the issue description).

@williamdias

@GoldFeniks, I am building static tensorflow-lite and then another static lib on top of it.

Try checking whether the symbols are present in libtensorflow-lite.a using the nm command.

GoldFeniks commented Jul 25, 2023

It turns out the AHardwareBuffer_* functions require linking against libandroid. So adding

find_library(android-lib android REQUIRED)

and changing target_link_libraries to

target_link_libraries(gpu tensorflow-lite ${android-lib})

fixes the problem.
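For reference, the sample project's CMakeLists.txt with this fix applied would look roughly like the sketch below (target and file names taken from the sample earlier in this thread; treat it as illustrative rather than the exact file from the repo).

```cmake
cmake_minimum_required(VERSION 3.26)

project(tflite_link_issue C CXX)

set(CMAKE_CXX_STANDARD 17)

add_subdirectory(tensorflow/tensorflow/lite)

add_library(gpu SHARED gpu.hpp gpu.cpp)

# AHardwareBuffer_* live in the NDK's libandroid (API 26 and later),
# so the consuming target must link it alongside tensorflow-lite.
find_library(android-lib android REQUIRED)
target_link_libraries(gpu tensorflow-lite ${android-lib})
```

Linking libandroid on the consuming target resolves the AHardwareBuffer_* symbols, while the CMakeLists.txt edits earlier in the thread resolve the tflite::delegates async/interop symbols.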

@pjpratik

@williamdias Thank you for the pointers.

@GoldFeniks Thanks for the PR. The issue will be closed once PR #61381 is merged.

@pjpratik pjpratik added the awaiting review Pull request awaiting review label Jul 26, 2023
@pkgoogle pkgoogle assigned pkgoogle and alankelly and unassigned pjpratik Jan 30, 2024
@pkgoogle pkgoogle added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jan 31, 2024
@pkgoogle

Hi @alankelly, it seems like the PR needs a review so I'm assigning this to you for now. Thanks!
