Build with XNNPACK #36
Could this boost the performance?
Hi @TheBricktop, this is because the GPU delegate (a GPU acceleration feature of TensorFlow Lite) is not supported on Windows at the moment. I would like to support it eventually.
What should one do to implement it? Recompile the TF Lite library?
It looks like it is not officially supported.
So on Android the performance would be higher?
Yes. If you are interested in using MediaPipe without the GPU delegate, please refer to XNNPACK (this issue) or an integer quantized model.
Would exporting TFLite models to ONNX and running them in Barracuda improve the performance?
Yes, if all ops are supported in Barracuda, it could improve performance. Please refer to the list of supported ops in Barracuda.
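As a sketch of the conversion path discussed above: the `tf2onnx` tool can convert a TFLite model directly to ONNX. The file names below are placeholders, and whether Barracuda then runs the model faster depends on its op support.

```shell
# Install the converter (assumption: a Python environment is available).
pip install tf2onnx

# Convert a TFLite model to ONNX. "model.tflite" and "model.onnx"
# are hypothetical file names; --opset picks the ONNX opset version.
python -m tf2onnx.convert --tflite model.tflite --output model.onnx --opset 13
```

The resulting `.onnx` file can then be imported into a Unity project as a Barracuda model asset.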
Now, XNNPACK options are enabled in the v2.4 libraries.
@asus4 I suspect XNNPACK is not correctly enabled based on the following observations:
Would you like to take a look?
Thanks, @tonysung. I mistakenly thought it would be automatically enabled in CPU mode. I will add the XNNPack delegate.
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/delegates/xnnpack
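For reference, one documented way to enable XNNPACK when building the TensorFlow Lite native library yourself is a Bazel define (described in the delegate README linked above). This is a sketch: the exact Bazel target varies by platform, and the C library target shown here is one common choice.

```shell
# Run from a tensorflow source checkout.
# --define tflite_with_xnnpack=true applies the XNNPACK delegate
# by default for floating-point operators in the built library.
bazel build -c opt --define tflite_with_xnnpack=true \
  //tensorflow/lite/c:tensorflowlite_c
```

Alternatively, the delegate can be created and attached explicitly at runtime through the API described in the README above, which also allows configuring options such as the number of threads.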