Closed
Labels: TF 2.7, comp:lite, stale, stat:awaiting response, type:bug
Description
Issue Type
Bug
Source
binary
Tensorflow Version
2.3.0
Custom Code
Yes
OS Platform and Distribution
Windows 10
Mobile device
Android
Python version
3.6
Bazel version
4.2.1
GCC/Compiler version
No response
CUDA/cuDNN version
No response
GPU model and memory
No response
Current Behaviour?
I use the TFLite Android API to get sentence embeddings from my TFLite model:

```java
Map<Integer, Object> outputs = new HashMap<>();
outputs.put(0, embeddings);
tflite.runForMultipleInputsOutputs(new Object[]{inputIds, attentionMask}, outputs);
```

TFLite Android inference produces slightly different results for the same input. Is this normal? What causes the small differences?
```
query: -0.03889307#-0.20992874#-0.012840451...
query: -0.038892966#-0.20992874#-0.012840495...
query: -0.03889298#-0.20992877#-0.012840421...
```
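One commonly cited source of this kind of variation is multi-threaded kernel execution, where the order of parallel floating-point accumulation can differ between runs. Below is a minimal sketch of a way to probe for that by pinning the interpreter to a single thread via `Interpreter.Options#setNumThreads`; the class name, model path, and the 768 embedding size are placeholders, not taken from the actual project:

```java
import org.tensorflow.lite.Interpreter;

import java.io.File;
import java.util.HashMap;
import java.util.Map;

class DeterminismProbe {
    // Placeholder output size; adjust to the model's actual embedding dimension.
    static final int EMBEDDING_DIM = 768;

    // Runs one inference with a single-threaded interpreter and returns the embedding.
    static float[][] embedOnce(File modelFile, int[][] inputIds, int[][] attentionMask) {
        Interpreter.Options options = new Interpreter.Options();
        options.setNumThreads(1); // one thread: fixes the order of parallel float accumulation

        Interpreter tflite = new Interpreter(modelFile, options);
        try {
            float[][] embeddings = new float[1][EMBEDDING_DIM];
            Map<Integer, Object> outputs = new HashMap<>();
            outputs.put(0, embeddings);
            tflite.runForMultipleInputsOutputs(new Object[]{inputIds, attentionMask}, outputs);
            return embeddings;
        } finally {
            tflite.close(); // release native interpreter resources
        }
    }
}
```

If the outputs become bit-identical with a single thread, the variation is coming from thread scheduling rather than from a bug in the model or converter.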
Standalone code to reproduce the issue
The full project is inconvenient to share. I just want to know whether this phenomenon is normal, and if not, whether there is a way to avoid it. A sketch of the kind of comparison that shows the drift is below.
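The sketch reuses the placeholder `embedOnce` helper, `modelFile`, and input arrays from above; it runs the identical input twice and reports the largest per-element drift:

```java
// Run the identical input twice and measure how far the embeddings drift.
float[][] a = DeterminismProbe.embedOnce(modelFile, inputIds, attentionMask);
float[][] b = DeterminismProbe.embedOnce(modelFile, inputIds, attentionMask);

float maxDiff = 0f;
for (int i = 0; i < a[0].length; i++) {
    maxDiff = Math.max(maxDiff, Math.abs(a[0][i] - b[0][i]));
}
// float32 carries roughly 7 significant decimal digits, so drift around 1e-7
// on values of magnitude ~0.04 is at the rounding level; anything much larger
// would suggest a genuinely nondeterministic execution path.
System.out.println("max |a - b| = " + maxDiff);
```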
Relevant log output
No response