Interpreter API (Java) - GpuDelegateV2 support #65114
Labels: Android · comp:lite (TF Lite related issues) · stat:awaiting tensorflower (Status - Awaiting response from tensorflower) · TF 2.15 (For issues related to 2.15.x) · TFLiteGpuDelegate (TFLite GPU delegate issue) · type:feature (Feature requests)
Hi,

I am trying to run a TFLite model on the GPU of an Android device. According to this documentation, it is possible to use both the Interpreter API and the Native C++ API to achieve this. At the moment, I am using the following dependencies:
I was able to successfully run my model using the GpuDelegate provided by the Java Interpreter API. However, this delegate does not allow specifying the inference priority options (TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY, TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE, TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION). These options can be specified when the Native C++ API is used, thanks to the presence of GpuDelegateV2; the Interpreter API, however, currently exposes no class named GpuDelegateV2. Is there a way to make use of this newer delegate without having to switch to the Native C++ API?
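For reference, my current working Java setup looks roughly like the sketch below (`modelBuffer` is assumed to be a `MappedByteBuffer` loaded elsewhere). Note that `GpuDelegate.Options` only exposes a precision flag and an inference *preference*, not the three inference *priorities* mentioned above:

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;
import java.nio.MappedByteBuffer;

// modelBuffer: a MappedByteBuffer containing the .tflite model (loaded elsewhere)
GpuDelegate.Options gpuOptions = new GpuDelegate.Options();
// Only these knobs exist here -- there is no setInferencePriority* equivalent.
gpuOptions.setPrecisionLossAllowed(true);
gpuOptions.setInferencePreference(
    GpuDelegate.Options.INFERENCE_PREFERENCE_SUSTAINED_SPEED);

GpuDelegate gpuDelegate = new GpuDelegate(gpuOptions);
Interpreter.Options options = new Interpreter.Options().addDelegate(gpuDelegate);
Interpreter interpreter = new Interpreter(modelBuffer, options);

// ... run inference, then release native resources:
interpreter.close();
gpuDelegate.close();
```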
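For comparison, this is roughly what the Native C++ path looks like, where `TfLiteGpuDelegateOptionsV2` does expose the three priority slots (a minimal sketch, assuming `interpreter` is an already-built `tflite::Interpreter`):

```cpp
#include "tensorflow/lite/delegates/gpu/delegate.h"
#include "tensorflow/lite/interpreter.h"

// Start from the defaults, then override the three ordered priorities.
TfLiteGpuDelegateOptionsV2 gpu_options = TfLiteGpuDelegateOptionsV2Default();
gpu_options.inference_priority1 = TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY;
gpu_options.inference_priority2 = TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE;
gpu_options.inference_priority3 = TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION;

TfLiteDelegate* delegate = TfLiteGpuDelegateV2Create(&gpu_options);
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
  // Delegation failed; inference will fall back to the CPU path.
}

// After the interpreter is destroyed:
TfLiteGpuDelegateV2Delete(delegate);
```

It is exactly this `inference_priority1..3` triple that I cannot find an equivalent for on the Java side.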