Description
Currently, when the input text exceeds the embedding model's `max_input_tokens` limit, a call to the embedding model truncates the text to that limit before embedding it, silently discarding the remainder. It would be better to split the text into chunks and embed each chunk separately (keeping track of chunk IDs and their relationship to one another) so that no data is lost.
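Here is a minimal sketch of the proposed behavior. The `tokenize`, `detokenize`, and `embed_one` callables and the `max_input_tokens` parameter are stand-ins for whatever the embedding backend actually exposes; they are assumptions for illustration, not the library's real API.

```python
# Sketch only: tokenize/detokenize/embed_one are hypothetical hooks,
# not the library's actual interface.
from dataclasses import dataclass
from typing import Callable, List
import uuid


@dataclass
class EmbeddedChunk:
    chunk_id: str          # unique ID for this chunk
    parent_id: str         # ID shared by all chunks of the same input text
    index: int             # position of the chunk within the original text
    embedding: List[float]


def embed_with_chunking(
    text: str,
    tokenize: Callable[[str], List[int]],
    detokenize: Callable[[List[int]], str],
    embed_one: Callable[[str], List[float]],
    max_input_tokens: int,
) -> List[EmbeddedChunk]:
    """Split `text` into token windows of at most max_input_tokens,
    embed each window, and record chunk IDs plus their relationship
    (shared parent_id, sequential index) instead of truncating."""
    tokens = tokenize(text)
    parent_id = str(uuid.uuid4())
    chunks: List[EmbeddedChunk] = []
    for i, start in enumerate(range(0, len(tokens), max_input_tokens)):
        window = tokens[start:start + max_input_tokens]
        chunks.append(EmbeddedChunk(
            chunk_id=f"{parent_id}:{i}",
            parent_id=parent_id,
            index=i,
            embedding=embed_one(detokenize(window)),
        ))
    return chunks
```

A real implementation would probably also want configurable overlap between adjacent windows, so that sentences straddling a chunk boundary are not split without shared context.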