Currently, my model's inference speed on a Mate 10 is 300 ms.
We have already applied all the structural optimizations we can at the model and algorithm level.
So I am looking for ways to improve performance at the Android platform level,
for example RenderScript or OpenCL.
However, I don't know whether these approaches really improve inference performance dramatically.
I wonder whether these methods are currently used in TensorFlow Mobile (including TFLite),
and which one would be the most appropriate.
Have a nice day :)