Why are the embedding inference results different from the Python version? #464
coolbeevip asked this question in Q&A (unanswered)
Replies: 0
I’m new to this field and have been using sentence_transformers for embedding inference. Recently, I tried text-embeddings-inference and noticed it was significantly faster. However, I found that the inference results differ when I use the same model. What might cause these differences, and can the results be made consistent across both approaches?
Python demo
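(The original snippet was not captured in this export. Below is a minimal sketch of what a sentence_transformers demo of this kind typically looks like; the model name `BAAI/bge-small-en-v1.5` and the input text are placeholders, not the actual values from the question.)

```python
# Hypothetical sketch: embed a sentence with sentence-transformers.
# Model and input are assumptions; substitute the ones you actually use.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
embedding = model.encode("What is deep learning?", normalize_embeddings=True)

# Print the leading values, for comparison with the server output below.
print(embedding[:5])
```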
text-embeddings-inference demo
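(This snippet was also not captured. A stdlib-only sketch of a client call to text-embeddings-inference follows; it assumes a TEI server is already running locally on port 8080 with the same model loaded, e.g. via `text-embeddings-router --model-id <model> --port 8080`. Host, port, and input text are placeholders.)

```python
# Hypothetical sketch: query a running text-embeddings-inference server
# through its /embed endpoint, using only the Python standard library.
import json
from urllib.request import Request, urlopen

req = Request(
    "http://127.0.0.1:8080/embed",  # assumed local TEI server
    data=json.dumps({"inputs": "What is deep learning?"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    with urlopen(req, timeout=5) as resp:
        # /embed returns a list of embeddings, one per input.
        embedding = json.loads(resp.read())[0]
        print(embedding[:5])  # leading values, to compare with the Python demo
except OSError as exc:
    print(f"TEI server not reachable: {exc}")
```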
These two examples clearly show that even the leading digits of the outputs differ.