
armnn cmake configuration warning #659

Closed
liamsun2019 opened this issue Jun 27, 2022 · 25 comments
Labels
Build issue This problem was about building ArmNN or one of its dependencies. Documentation issue

Comments

@liamsun2019

Hi, I am getting a warning when configuring Arm NN with the following command:

CXX=armv7a-linux-androideabi30-clang++ CC=armv7a-linux-androideabi30-clang CXX_FLAGS="-fPIE -fPIC -I/home2/liam/armnn-devenv/armnn/src/armnn" cmake .. -DCMAKE_ANDROID_NDK=$NDK -DCMAKE_SYSTEM_NAME=Android -DCMAKE_SYSTEM_VERSION=30 -DCMAKE_ANDROID_ARCH_ABI=armeabi-v7a -DCMAKE_EXE_LINKER_FLAGS="-pie -llog -lz" -DARMCOMPUTE_ROOT=/home2/liam/armnn-devenv/ComputeLibrary/ -DARMCOMPUTE_BUILD_DIR=/home2/liam/armnn-devenv/ComputeLibrary/build -DARMCOMPUTENEON=1 -DARMCOMPUTECL=1 -DARMNNREF=1 -DPROTOBUF_ROOT=/home2/liam/armnn-devenv/google/arm32_pb_install -DFLATBUFFERS_ROOT=/home2/liam/armnn-devenv/google/flatbuffers_install -DFLATC_DIR=/home2/liam/armnn-devenv/flatbuffers-1.12.0/build -DBUILD_ARMNN_QUANTIZER=1 -DBUILD_ARMNN_SERIALIZER=1 -DFLATBUFFERS_INCLUDE_PATH=/home2/liam/armnn-devenv/google/flatbuffers_install/include -DFLATBUFFERS_LIBRARY=/home2/liam/armnn-devenv/google/flatbuffers_install/lib

CMake Warning:
Manually-specified variables were not used by the project:

BUILD_ARMNN_QUANTIZER
PROTOBUF_ROOT

I am wondering why this warning occurs. My goal is to build the Arm NN quantizer, serializer, and libraries.

@james-conroy-arm
Contributor

james-conroy-arm commented Jun 27, 2022

Hi @liamsun2019 ,

Many thanks for raising this issue.

We no longer support the Arm NN Quantizer since the 21.05 release of Arm NN (May 2021). I'd recommend that you refer to TF Lite's documentation on how to quantize TensorFlow models: https://www.tensorflow.org/lite/performance/model_optimization

It's possible that you are using an old version of one of our guides (or of the Arm NN repo); could you please share the link you are using?

If you let us know what you are trying to achieve with your model and Arm NN, we will try to help you.

Thanks,
James

@liamsun2019
Author

liamsun2019 commented Jun 27, 2022

Hi James,
Many thanks for your quick reply. In fact, I am referring to
https://github.com/ARM-software/armnn
to study Arm NN.

I checked out branch 22.05, which is a fairly recent version, and followed https://github.com/ARM-software/armnn/blob/branches/armnn_22_05/BuildGuideAndroidNDK.md to build the Arm NN libraries and utilities. The reason I tried to build the quantizer is that I could not find an Arm NN quantizer in the prebuilt-binaries section. I have some models in ONNX/PyTorch format that need to be quantized to int8/uint8. In previous projects, I performed quantization with the tools provided by NPU vendors; for example, quantization based on a set of collected images using algorithms such as KL divergence or min/max. With Arm NN, I have not yet figured out the correct way to do something similar.

From what you mentioned, my understanding is that I should first convert the model to the TF Lite representation and then quantize it. Is that right?

@james-conroy-arm
Contributor

No problem :)

We support both TF Lite (through the C++ TF Lite Parser API and the C++/Python TF Lite Delegate API) and ONNX (through the C++ ONNX Parser API) models. ML operator support is most complete through the C++/Python TF Lite Delegate API and is currently the preferred way to use Arm NN. Based on that, we'd recommend converting to (or reproducing in) TensorFlow/Keras and then to TF Lite.

In order to optimize models in TF, a model usually needs to be in TensorFlow floating point format before being converted to quantized TF Lite format. Arm NN does not support TensorFlow .pb or Keras models - they must be converted to TF Lite first: https://www.tensorflow.org/lite/models/convert/

The different methods of quantization provided by TensorFlow can be found here: https://www.tensorflow.org/lite/performance/model_optimization#quantization
You'll also find information about clustering and pruning on that page.
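As a rough sketch of the post-training route described above (a hedged example, not Arm NN documentation: it assumes TensorFlow 2.x is installed on the host and a float SavedModel exists at ./my_model, both illustrative), the conversion with default dynamic-range quantization looks like this:

```shell
# Convert a float SavedModel to a dynamic-range-quantized .tflite file.
# "my_model" and the output file name are placeholders.
python3 - <<'EOF'
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("my_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantization
tflite_model = converter.convert()

with open("my_model_quant.tflite", "wb") as f:
    f.write(tflite_model)
EOF
```

Full-integer int8/uint8 quantization additionally requires a representative dataset, as described on the model-optimization page linked above.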

Hope that helps, feel free to ask anything else.

Cheers,
James

@liamsun2019
Author

Hi James,
Thanks for your kind help. I think it's clear enough. I will do some tests and let you know in case of any questions.

B.R
Liam

@james-conroy-arm
Contributor

No bother, thank you for using our software.

James

@liamsun2019
Author

Hi James,
I followed
https://github.com/ARM-software/armnn/blob/branches/armnn_22_05/BuildGuideAndroidNDK.md
to build the Arm NN libraries and succeeded in generating several .so files, such as libarmnnSerializer.so and libarmnn.so. I then tried to build the TF Lite delegate and parser libraries but failed with various errors.
I followed
https://android.googlesource.com/platform/external/armnn/+/refs/heads/upstream-master/delegate/BuildGuideNative.md
to do the build. Is there a complete guide for building the TF Lite delegate and parser libraries using the Android NDK?

@liamsun2019
Author

The command line I used to configure Arm NN is as follows:
CXX=armv7a-linux-androideabi30-clang++ CC=armv7a-linux-androideabi30-clang CXX_FLAGS="-fPIE -fPIC -I/home2/liam/armnn-devenv/armnn/src/armnn" cmake .. -DCMAKE_ANDROID_NDK=$NDK -DCMAKE_SYSTEM_NAME=Android -DCMAKE_SYSTEM_VERSION=30 -DCMAKE_ANDROID_ARCH_ABI=armeabi-v7a -DCMAKE_EXE_LINKER_FLAGS="-pie -llog -lz" -DARMCOMPUTE_ROOT=/home2/liam/armnn-devenv/ComputeLibrary/ -DARMCOMPUTE_BUILD_DIR=/home2/liam/armnn-devenv/ComputeLibrary/build -DARMCOMPUTENEON=1 -DARMCOMPUTECL=1 -DARMNNREF=1 -DPROTOBUF_ROOT=/home2/liam/armnn-devenv/google/arm32_pb_install -DFLATBUFFERS_ROOT=/home2/liam/armnn-devenv/google/flatbuffers_install -DFLATC_DIR=/home2/liam/armnn-devenv/flatbuffers-1.12.0/build -DBUILD_ARMNN_SERIALIZER=1 -DFLATBUFFERS_INCLUDE_PATH=/home2/liam/armnn-devenv/google/flatbuffers_install/include -DFLATBUFFERS_LIBRARY=/home2/liam/armnn-devenv/google/flatbuffers_arm32_install/lib/libflatbuffers.a -DBUILD_ARMNN_TFLITE_DELEGATE=1 -DTENSORFLOW_ROOT=/home2/liam/armnn-devenv/tensorflow -DTFLITE_LIB_ROOT=/home2/liam/armnn-devenv/tensorflow/build

The following error message is then generated:
Could NOT find TfLiteSrc (missing: TfLite_INCLUDE_DIR
TfLite_Schema_INCLUDE_PATH)

@james-conroy-arm
Contributor

james-conroy-arm commented Jun 27, 2022

Hi @liamsun2019

@catcor01 is looking into this for you now. We don't have a complete guide for what you want yet (coming soon!), but Cathal will go through the steps and work out the best way to help you.

Could you please share details about your host/target environments (hardware, OS, etc.)? It sounds like your host is x86_64.

Thanks,
James

@liamsun2019
Author

Hi James,
The following is my development environment:

Target platform: Arm Cortex-A55, Android R (API level 30)
Host:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
NDK toolchain: android-ndk-r23c

Hope it helps. Thanks.

@liamsun2019
Author

Additional information: the ABI is armv7a, not armv8.

@liamsun2019
Author

cmake version 3.23.2

@liamsun2019
Author

Another test: I followed armnn/BuildGuideCrossCompilation.md to build the delegate library using the Android NDK.

  1. LDFLAGS="-llog" CXX=armv7a-linux-androideabi30-clang++ CC=armv7a-linux-androideabi30-clang cmake .. -DARMCOMPUTE_ROOT=/home2/liam/armnn-devenv/ComputeLibrary -DARMCOMPUTE_BUILD_DIR=/home2/liam/armnn-devenv/ComputeLibrary/build/ -DARMCOMPUTENEON=1 -DARMCOMPUTECL=1 -DARMNNREF=1 -DBUILD_TF_LITE_PARSER=1 -DTENSORFLOW_ROOT=/home2/liam/armnn-devenv/tensorflow -DTF_LITE_SCHEMA_INCLUDE_PATH=/home2/liam/armnn-devenv/tflite -DFLATBUFFERS_ROOT=/home2/liam/armnn-devenv/flatbuffers-arm32 -DFLATC_DIR=/home2/liam/armnn-devenv/flatbuffers-1.12.0/build -DPROTOBUF_ROOT=/home2/liam/armnn-devenv/google/x86_64_pb_install -DPROTOBUF_LIBRARY_DEBUG=/home2/liam/armnn-devenv/google/arm32_pb_install/lib/libprotobuf.so -DPROTOBUF_LIBRARY_RELEASE=/home2/liam/armnn-devenv/google/arm32_pb_install/lib/libprotobuf.so -DTFLITE_LIB_ROOT=/home2/liam/armnn-devenv/tflite/build -DBUILD_ARMNN_TFLITE_DELEGATE=1

After configuring with the above command, make -j32 fails with the following error message:
ld: error: unable to find library -lpthread

As we know, the Android NDK has no explicit pthread library. I checked libarmnnDelegate.so extracted from the prebuilt ArmNN-android-29-armv7a.tar.gz and found no dependency on libpthread.so. I am wondering how I can achieve the same.

  2. To avoid the pthread issue, I added -DCMAKE_ANDROID_NDK=$NDK -DCMAKE_SYSTEM_NAME=Android and then failed in the configure phase:
    Could NOT find TfLiteSrc (missing: TfLite_INCLUDE_DIR
    TfLite_Schema_INCLUDE_PATH)
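As an aside on item 1 above, one way to confirm that a prebuilt .so really has no libpthread dependency is to list its DT_NEEDED entries; a sketch, assuming an NDK r23c install on a Linux x86_64 host (the toolchain path is illustrative):

```shell
# Print the shared-library dependencies of the prebuilt delegate;
# libpthread.so should not appear for Android builds, since Bionic
# provides the pthread API inside libc.
$NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-readelf -d libarmnnDelegate.so | grep NEEDED
```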

@catcor01 added the "Documentation issue" and "Build issue" labels on Jun 28, 2022
@liamsun2019
Author

Hi @catcor01,
Sorry to bother you. I notice that you also provide prebuilt binaries, which indicates that you can successfully build the TF Lite delegate libraries using the Android NDK. I am now getting the errors described above in this issue. Any suggestions are appreciated. Thanks for your time.

@catcor01
Collaborator

Hello @liamsun2019,

I have just gotten around to reproducing the same cmake failures you reported. Please bear with me while I try to find a solution to these issues; I will report back as soon as I have something.

I can also look at the prebuilt binaries and see what the problem is there. I suspect the two issues are similar.

Thank you for your patience.

Kind Regards, Cathal.

@liamsun2019
Author

Hi @catcor01,
Thanks for your comment. It's pretty helpful.

B.R
Liam

@liamsun2019
Author

liamsun2019 commented Jun 30, 2022

Hi @catcor01,
I kept trying to build the delegate library with the Android NDK; here are my findings:

  1. For the pthread issue, I simply commented out these lines in delegate/CMakeLists.txt:

if(NOT "${CMAKE_SYSTEM_NAME}" STREQUAL Android)
    #target_link_libraries(armnnDelegate PUBLIC -lpthread)
    #target_link_libraries(armnnDelegate PUBLIC -ldl)
endif()
  2. Added two extra library searches in delegate/cmake/Modules/FindTfLite.cmake:

if (TfLite_LIB MATCHES .a$)
    message("-- Static tensorflow lite library found, using for ArmNN build")
    find_library(TfLite_abseilstrings_LIB "libabsl_strings.a"
                 PATH ${TFLITE_LIB_ROOT}/_deps/abseil-cpp-build/absl/strings)
    find_library(TfLite_farmhash_LIB "libfarmhash.a"
                 PATH ${TFLITE_LIB_ROOT}/_deps/farmhash-build)
    find_library(TfLite_fftsg_LIB "libfft2d_fftsg.a"
                 PATH ${TFLITE_LIB_ROOT}/_deps/fft2d-build)
    find_library(TfLite_fftsg2d_LIB "libfft2d_fftsg2d.a"
                 PATH ${TFLITE_LIB_ROOT}/_deps/fft2d-build)
    find_library(TfLite_ruy_LIB "libruy.a"
                 PATH ${TFLITE_LIB_ROOT}/_deps/ruy-build)
    find_library(TfLite_throw_delegate_LIB "libabsl_throw_delegate.a"
                 PATH ${TFLITE_LIB_ROOT}/_deps/abseil-cpp-build/absl/base)
    find_library(TfLite_raw_logging_internal_LIB "libabsl_raw_logging_internal.a"
                 PATH ${TFLITE_LIB_ROOT}/_deps/abseil-cpp-build/absl/base)
    find_library(TfLite_flatbuffers_LIB "libflatbuffers.a"
                 PATH ${TFLITE_LIB_ROOT}/_deps/flatbuffers-build)

    find_package_handle_standard_args(TfLite DEFAULT_MSG TfLite_LIB TfLite_abseilstrings_LIB TfLite_ruy_LIB TfLite_fftsg_LIB TfLite_fftsg2d_LIB TfLite_farmhash_LIB TfLite_flatbuffers_LIB TfLite_throw_delegate_LIB TfLite_raw_logging_internal_LIB)
    # Set external variables for usage in CMakeLists.txt
    if (TFLITE_FOUND)
        set(TfLite_LIB ${TfLite_LIB} ${TfLite_abseilstrings_LIB} ${TfLite_ruy_LIB} ${TfLite_fftsg_LIB} ${TfLite_fftsg2d_LIB} ${TfLite_farmhash_LIB} ${TfLite_flatbuffers_LIB} ${TfLite_throw_delegate_LIB} ${TfLite_raw_logging_internal_LIB})
    endif ()
elseif (TfLite_LIB MATCHES .so$)
    message("-- Dynamic tensorflow lite library found, using for ArmNN build")
    find_package_handle_standard_args(TfLite DEFAULT_MSG TfLite_LIB)
    # Set external variables for usage in CMakeLists.txt
    if (TFLITE_FOUND)
        set(TfLite_LIB ${TfLite_LIB})
    endif ()
else()
    message(FATAL_ERROR "Could not find a tensorflow lite library to use")
endif()

This is needed for BUILD_UNIT_TESTS, which is ON by default. Otherwise, undefined-symbol errors come up:
ld: error: undefined symbol: absl::lts_2020_02_25::base_internal::ThrowStdOutOfRange(char const*)
ld: error: undefined symbol: absl::lts_2020_02_25::raw_logging_internal::RawLog(absl::lts_2020_02_25::LogSeverity, char const*, int, char const*, ...)

The build then goes through with the above two modifications. It is not elegant, just my own workaround. FYR. Thanks.

@catcor01
Collaborator

catcor01 commented Jul 1, 2022

Hello @liamsun2019,

You seem to have worked around the issues before I came up with a solution. Interestingly, the undefined-symbol errors arose for me as well. Your fix in delegate/cmake/Modules/FindTfLite.cmake worked perfectly for me; I appreciate it. I think this is a change that will need to be made to Arm NN.

I think your 'Could NOT find TfLiteSrc' issue is due to not specifying -DTF_LITE_GENERATED_PATH, which should point to the schema.fbs and schema_generated.h files generated from the TF Lite schema. Could you give this a try and verify whether it works in your situation?
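For reference, the files that -DTF_LITE_GENERATED_PATH points at can be produced roughly as follows (a sketch only; the paths mirror the ones used earlier in this thread, and flatc must be the host x86_64 build so it can run during configuration):

```shell
# Copy the TF Lite schema out of the TensorFlow tree and generate the
# C++ header with the host flatc.
mkdir -p "$HOME/armnn-devenv/tflite"
cp "$HOME/armnn-devenv/tensorflow/tensorflow/lite/schema/schema.fbs" "$HOME/armnn-devenv/tflite/"
cd "$HOME/armnn-devenv/tflite"
"$HOME/armnn-devenv/flatbuffers-1.12.0/build/flatc" -c --gen-object-api schema.fbs
# The directory now holds schema.fbs and schema_generated.h; pass it
# via -DTF_LITE_GENERATED_PATH.
```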

Here is my final cmake command to build ArmNN:

CXX=aarch64-linux-android29-clang++ CC=aarch64-linux-android29-clang CXX_FLAGS="-fPIE -fPIC" cmake ${WORKING_DIR}/armnn -DCMAKE_ANDROID_NDK=$NDK -DCMAKE_SYSTEM_NAME=Android -DCMAKE_SYSTEM_VERSION=29 -DCMAKE_ANDROID_ARCH_ABI=arm64-v8a -DCMAKE_EXE_LINKER_FLAGS="-pie -llog -lz" -DARMCOMPUTE_ROOT=$WORKING_DIR/clframework/ -DARMCOMPUTE_BUILD_DIR=$WORKING_DIR/clframework/build/android-arm64v8a/ -DARMCOMPUTENEON=1 -DARMCOMPUTECL=1 -DARMNNREF=1 -DFLATBUFFERS_ROOT=$FLATBUFFERS_ANDROID_BUILD -DFLATC_DIR=$FLATBUFFERS_X86_BUILD -DBUILD_ARMNN_SERIALIZER=1 -DBUILD_GATORD_MOCK=0 -DBUILD_BASE_PIPE_SERVER=0 -DONNX_GENERATED_SOURCES=$HOME/armnn-devenv/onnx -DBUILD_ONNX_PARSER=1 -DPROTOBUF_ROOT=$HOME/armnn-devenv/google/android_pb_install -DBUILD_TF_LITE_PARSER=1 -DTF_LITE_GENERATED_PATH=/home/catcor01/armnn-devenv/tflite -DBUILD_ARMNN_TFLITE_DELEGATE=1 -DTFLITE_LIB_ROOT=$HOME/armnn-devenv/tflite/build/ -DTENSORFLOW_ROOT=$HOME/armnn-devenv/tensorflow

I had a specific issue myself when building TFLite Android cross compile here:

CMake Error at /home/catcor01/armnn-devenv/tflite/build/eigen/CMakeLists.txt:103 (message):
  Can't link to the standard math library.  Please report to the Eigen
  developers, telling them about your platform.

I found a solution to the problem, which seems to crop up for NDK versions r19+ after the unified toolchain was introduced: add -DEIGEN_TEST_CXX11=ON to the cmake command. My final command for building TFLite was:

CXX=aarch64-linux-android29-clang++ CC=aarch64-linux-android29-clang cmake -DTFLITE_ENABLE_XNNPACK=OFF -DCMAKE_TOOLCHAIN_FILE=$HOME/armnn-devenv/android-ndk-r20b/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=android-29 -DCMAKE_SYSTEM_NAME=Android -DEIGEN_TEST_CXX11=ON $HOME/armnn-devenv/tensorflow/tensorflow/lite

I do apologize that we do not currently have a full guide for what you wanted to do here. Hopefully, in the near future, James will land work that simplifies and automates all these tasks. I expect the prebuilt binaries will continue to have the same issue when trying to run the delegate with the Android NDK until FindTfLite.cmake is updated, which will be fixed in the next release. If you have any persisting issues, I am happy to help.

Kind Regards, Cathal.

@liamsun2019
Author

liamsun2019 commented Jul 2, 2022

Hi @catcor01,
Many thanks for the detailed information and for sharing your setup with me. I will try it out. My tests showed that some compile issues are really toolchain-specific. I am also trying to build the object detection sample. The resulting x86_64 build runs well, although it is very slow, as expected. I will try cross-compiling and verify on an Arm CPU, maybe Cortex-A55/A75.

B.R
Liam

@catcor01
Collaborator

Hello @liamsun2019,

I am wondering if you would contribute your fix in delegate/cmake/Modules/FindTfLite.cmake to Arm NN using the Arm NN contributor guide, as it seems to be an important fix required to build the delegate. I will be here to help in any way possible with getting the patch submitted if you encounter any problems.

Thanks again, Cathal.

@liamsun2019
Author

Hi @catcor01,
Got it, I will follow up and let you know in case of any questions.

Thanks
B.R
Liam

@catcor01
Collaborator

Just a quick update to let you know that I have updated our contributor guide here; it should now be easier to follow.

@liamsun2019
Author

Hi @catcor01,
I followed the instructions and encountered some problems.

  1. git clone https://review.mlplatform.org/ml/armnn
    fatal: unable to access 'https://review.mlplatform.org/ml/armnn/': server certificate verification failed. CAfile: none CRLfile: none
    I have to export GIT_SSL_NO_VERIFY=1 before cloning, but I do not think that is the proper way to solve the problem.

  2. I can visit https://review.mlplatform.org/ and find my account information, which shows "registered". Is there anything else I need to add, such as an SSH key or HTTP password? I am just afraid the subsequent commit/push operations may fail due to some potential issue.

@catcor01
Collaborator

Hello @liamsun2019,

Yes, that export does not seem standard. I will keep it in mind, though, in case it crops up in the future for others contributing to Arm NN.

I would first try to push and see if any failures occur. However, based on point 1 above, I think we might need to add an SSH key. This can be done with ssh-keygen, pasting the contents of id_rsa.pub into the 'SSH Keys' tab in the 'Settings' menu of https://review.mlplatform.org/. Please let me know if an error occurs when pushing without adding the SSH key first, and I can update our contributor guide.
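A sketch of that key setup (the <username> placeholder and the standard Gerrit SSH port 29418 are assumptions, not taken from the contributor guide):

```shell
# Generate an RSA key pair if one does not already exist.
ssh-keygen -t rsa -b 4096
# Show the public key; paste it into the 'SSH Keys' tab under 'Settings'
# on https://review.mlplatform.org/.
cat ~/.ssh/id_rsa.pub
# After registering the key, clone (and later push) over SSH instead of HTTPS:
git clone "ssh://<username>@review.mlplatform.org:29418/ml/armnn"
```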

Kind Regards, Cathal.

@Afef00

Afef00 commented Oct 5, 2022

@liamsun2019 Is it possible to use pruned models with Arm NN (with cross compilation)?

@keidav01
Contributor

keidav01 commented Oct 7, 2022

Hi @Afef00, can you please create a new ticket with this question? It does not appear to be relevant to this one.

Closing this ticket for now, as @liamsun2019 appears to be satisfied.

@keidav01 closed this as completed on Oct 7, 2022