armnn cmake configuration warning #659
Hi @liamsun2019, many thanks for raising this issue. We no longer support the Arm NN Quantizer as of the 21.05 release of Arm NN (May 2021). I'd recommend that you refer to TF Lite's documentation on how to quantize TensorFlow models: https://www.tensorflow.org/lite/performance/model_optimization

It's possible that you are using an old version of one of our guides (or of the Arm NN repo); could you please share with us the link you are using? If you let us know what you are trying to achieve with your model and Arm NN, we will try to help you. Thanks, James
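As a minimal sketch of the post-training quantization flow that documentation covers (the SavedModel path and output filename below are placeholders, not taken from this thread):

```python
import tensorflow as tf

# Load a floating-point TensorFlow SavedModel (placeholder path).
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")

# Default optimizations enable dynamic-range post-training quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("model_dynamic_quant.tflite", "wb") as f:
    f.write(tflite_model)
```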
Hi James, I checked out branch 22.05, which is a fairly recent version, and followed https://github.com/ARM-software/armnn/blob/branches/armnn_22_05/BuildGuideAndroidNDK.md to build the armnn libraries and utilities. The reason I tried to build the quantizer is that I could not find the armnn quantizer in the prebuilt-binaries section.

I have some models in onnx/pytorch format that need to be quantized to int8/uint8. In my previous projects I did the quantization with tools provided by NPU vendors; for example, the quantization is performed based on a set of collected images using specific calibration algorithms such as KL, MAX/MIN, etc. With armnn, I have not figured out the correct way to do a similar thing. From what you mentioned, my understanding is that I should first convert the model to a tflite representation and then do the quantization. Is that right?
No problem :) We support both TF Lite models (through the C++ TF Lite Parser API and the C++/Python TF Lite Delegate API) and ONNX models (through the C++ ONNX Parser API). ML operator support is most complete through the C++/Python TF Lite Delegate API, which is currently the preferred way to use Arm NN. Based on that, we'd recommend converting your models to (or reproducing them in) TensorFlow/Keras and then converting to TF Lite.

To optimize a model in TF, it usually needs to be in TensorFlow floating-point format before being converted to the quantized TF Lite format. Arm NN does not support TensorFlow .pb or Keras models directly - they must be converted to TF Lite first: https://www.tensorflow.org/lite/models/convert/

The different methods of quantization provided by TensorFlow can be found here: https://www.tensorflow.org/lite/performance/model_optimization#quantization

Hope that helps, feel free to ask anything else. Cheers, James
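As a concrete sketch of the full-integer path from those quantization docs (matching the int8/uint8 requirement mentioned above), something along these lines could be used; the SavedModel path, input shape, and random calibration data are placeholders standing in for real collected images:

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield calibration samples shaped like the model input.
    # Random data here is a stand-in for real calibration images.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# from_keras_model() works the same way if the model is a Keras object.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

# Restrict to int8 kernels and make the input/output tensors uint8.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```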
Hi James, B.R
No bother, thank you for using our software. James
Hi James,
The command line I used to configure armnn is shown as follows: The following error message is then generated:
Hi @liamsun2019, @catcor01 is looking into this for you now. We don't have a complete guide for what you want (yet - coming soon!), but Cathal will go through the steps and work out the best way to help you. Could you share with us details about your host/target environments, please (i.e. hardware, OS, etc.)? It sounds like your host is x86_64. Thanks, James
Hi James, Target platform: Arm Cortex-A55, Android R (API level 30). Hope it helps. Thanks.
Additional information: the ABI is armv7a, not armv8.
CMake version: 3.23.2
As another test, I followed armnn/BuildGuideCrossCompilation.md to build the delegate library using the Android NDK.
After configuring with the above command, make -j32 fails with the following error message: As we know, the Android NDK has no separate pthread library. I checked the libarmnnDelegate.so extracted from the prebuilt ArmNN-android-29-armv7a.tar.gz and found no dependency on libpthread.so, so I am wondering how I can achieve the same.
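For reference on how the delegate library is consumed once it does build, loading libarmnnDelegate.so from Python through the TF Lite external-delegate API (the route called "preferred" earlier in the thread) might look roughly like the sketch below. The delegate option keys ("backends", "logging-severity") and the model path are assumptions for illustration, not taken from this build, and should be checked against the Arm NN delegate documentation.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the Arm NN TF Lite delegate; option keys here are illustrative assumptions.
armnn_delegate = tflite.load_delegate(
    "libarmnnDelegate.so",
    options={"backends": "CpuAcc,CpuRef", "logging-severity": "info"})

# Run a quantized model through the delegate.
interpreter = tflite.Interpreter(
    model_path="model_int8.tflite",
    experimental_delegates=[armnn_delegate])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(interpreter.get_output_details()[0]["index"]))
```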
Hi @catcor01,
Hello @liamsun2019, I have just gotten around to reproducing the same cmake failures you reported. Please bear with me while I try to find a solution to these issues; I will report back as soon as I have something. I can also look at the prebuilt binaries and see what the problem is there. I suspect the two issues are similar. Thank you for your patience. Kind Regards, Cathal.
Hi @catcor01, B.R
Hi @catcor01, I made two changes on my side. The first:

    if(NOT "${CMAKE_SYSTEM_NAME}" STREQUAL Android)

and the second:

    elseif (TfLite_LIB MATCHES .so$)

The second one is for building with BUILD_UNIT_TESTS, which is ON by default; otherwise undefined symbol errors will come up. The compile then goes well with the above two modifications. It's not graceful, just my own tests. FYR. Thanks.
Hello @liamsun2019, you seem to have worked around the issues before I came up with any solution. Interestingly, the undefined symbol errors arose for me as well. I think your 'Could NOT find TfLiteSrc' issue is due to not specifying -DTF_LITE_GENERATED_PATH. Here is my final cmake command to build ArmNN:
I had a specific issue myself when cross-compiling TF Lite for Android, here:
I found a solution to the problem here; it seems to crop up for NDK versions r19+ after the unified toolchain was introduced. The solution is to add
I do apologize that we do not currently have a full guide for what you wanted to do here. Hopefully, in the near future James will land work that will simplify and automate all these tasks. I would think that the prebuilt binaries will continue to have the same issue when trying to run the delegate with the Android NDK until this is fixed. Kind Regards, Cathal.
Hi @catcor01, B.R
Hello @liamsun2019, I am wondering if you would contribute your fix. Thanks again, Cathal.
Hi @catcor01, Thanks
Just a quick update to let you know that I have updated our contributor guide here, which should be easier to follow.
Hi @catcor01,
Hello @liamsun2019, yes, that export does not seem standard. I will keep it in mind though, in case it crops up in the future for others who contribute to ArmNN. I would first try to push and see if any failures occur. However, based on point 1 above, I am thinking we might need to add an SSH key. Kind Regards, Cathal.
@liamsun2019 Is it possible to use pruned models with Arm NN (with cross-compilation)?
Hi @Afef00, can you please create a new ticket with this question? It does not appear to be relevant to this one. Closing this ticket for now, as @liamsun2019 appears to be satisfied.
Hi, I am getting a warning when configuring armnn with the following command:
CXX=armv7a-linux-androideabi30-clang++ CC=armv7a-linux-androideabi30-clang \
CXX_FLAGS="-fPIE -fPIC -I/home2/liam/armnn-devenv/armnn/src/armnn" \
cmake .. \
  -DCMAKE_ANDROID_NDK=$NDK \
  -DCMAKE_SYSTEM_NAME=Android \
  -DCMAKE_SYSTEM_VERSION=30 \
  -DCMAKE_ANDROID_ARCH_ABI=armeabi-v7a \
  -DCMAKE_EXE_LINKER_FLAGS="-pie -llog -lz" \
  -DARMCOMPUTE_ROOT=/home2/liam/armnn-devenv/ComputeLibrary/ \
  -DARMCOMPUTE_BUILD_DIR=/home2/liam/armnn-devenv/ComputeLibrary/build \
  -DARMCOMPUTENEON=1 \
  -DARMCOMPUTECL=1 \
  -DARMNNREF=1 \
  -DPROTOBUF_ROOT=/home2/liam/armnn-devenv/google/arm32_pb_install \
  -DFLATBUFFERS_ROOT=/home2/liam/armnn-devenv/google/flatbuffers_install \
  -DFLATC_DIR=/home2/liam/armnn-devenv/flatbuffers-1.12.0/build \
  -DBUILD_ARMNN_QUANTIZER=1 \
  -DBUILD_ARMNN_SERIALIZER=1 \
  -DFLATBUFFERS_INCLUDE_PATH=/home2/liam/armnn-devenv/google/flatbuffers_install/include \
  -DFLATBUFFERS_LIBRARY=/home2/liam/armnn-devenv/google/flatbuffers_install/lib
CMake Warning:
Manually-specified variables were not used by the project:
I am wondering why this warning happens. My target is to build the armnn quantizer, serializer, and libraries.