I've checked Int8 inference on an ARM platform and it worked very well.
Thank you for the great library.
The one drawback of Int8 inference is speed: on my side it is about twice as slow as fp32 inference.
I checked the code and found that no NEON optimization for conv3x3 is implemented yet.
When will it be available?
I can't wait to see the improvement.