Do you mean the neural_sparse query clause? It seems you're using query tokens and weights to search, and you pasted a link about sparse search, but the neural query uses dense models.
Correct, sorry for the confusion. I used the wrong query in my example, probably because I have never used the neural_sparse query. I've updated the example and added a link to the sparse search.
model-collapse changed the title from "[FEATURE] Support for vectors as parameters in the neural search query" to "[FEATURE] Support for raw sparse vectors input in the neural sparse query" on Apr 2, 2024
Hi @brusic, our enhancement has been merged and will be released in version 2.14. Users can now use the neural_sparse query with raw tokens. Sample query:
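A minimal sketch of the raw-token form, assuming a sparse embedding field named `passage_embedding` (the index, field name, tokens, and weights here are illustrative, not from the original comment):

```json
GET my-index/_search
{
  "query": {
    "neural_sparse": {
      "passage_embedding": {
        "query_tokens": {
          "hello": 1.1,
          "world": 0.9
        }
      }
    }
  }
}
```

With `query_tokens`, the client supplies the token-to-weight map directly, so no model id or ingest pipeline is required at query time.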
Is your feature request related to a problem?
Neural sparse search
Currently the neural search query only accepts the model id alongside the text to be encoded, which requires a model to be registered into a pipeline. The query should also support passing in the vector directly, bypassing the pipeline phase. It can be beneficial for clients to do the encoding for several reasons: ad hoc analysis, unit testing, custom/unsupported models.
What solution would you like?
Accept a vector directly, similar to the knn query.
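For comparison, the knn query already accepts a raw vector computed client-side (field name and values are illustrative):

```json
GET my-index/_search
{
  "query": {
    "knn": {
      "my_vector_field": {
        "vector": [0.1, 0.2, 0.3],
        "k": 10
      }
    }
  }
}
```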
What alternatives have you considered?
rank_features is a close alternative, but can only rank (boost) other query clauses.
Do you have any additional context?
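For context, a rank_features field stores token-weight pairs and is searched with the rank_feature query, which contributes a feature-based score rather than acting as a sparse retrieval mechanism. A sketch, with illustrative index and field names:

```json
PUT my-index
{
  "mappings": {
    "properties": {
      "token_weights": { "type": "rank_features" }
    }
  }
}

GET my-index/_search
{
  "query": {
    "rank_feature": {
      "field": "token_weights.politics"
    }
  }
}
```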
Elasticsearch will soon have a weighted_tokens query, which is analogous to their text_expansion query.