Open
Labels
:Search Relevance/Vectors, >enhancement, Team:Search Relevance
Description
Interestingly, I have noticed in flame graphs that we spend a measurable amount of time parsing the knn query for smaller indices. While we only parse the query once, it's odd that it shows up in the flame graph at all.
If we accepted a base64-encoded string as the query vector, we could effectively eliminate the xcontent JSON parsing costs.
We COULD rewrite the query on the coordinator given the mapping information (if possible) and then parse the base64 into a vector just once.
Of course, as with everything, we should benchmark this to see if it's worth it.
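To illustrate the idea, here is a minimal sketch of what decoding a base64-encoded vector could look like. The `encode`/`decode` helpers and the little-endian byte order are assumptions for illustration, not the actual Elasticsearch implementation; the point is that decoding is a single bulk byte conversion rather than token-by-token JSON number parsing.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Base64;

public class Base64VectorDecode {

    // Hypothetical helper: serialize a float[] as base64 over little-endian IEEE 754 bytes.
    static String encode(float[] vector) {
        ByteBuffer buf = ByteBuffer.allocate(vector.length * Float.BYTES)
            .order(ByteOrder.LITTLE_ENDIAN);
        for (float v : vector) {
            buf.putFloat(v);
        }
        return Base64.getEncoder().encodeToString(buf.array());
    }

    // Decode the base64 string straight into a float[] -- no per-element
    // xcontent/JSON token parsing, just one bulk byte-to-float conversion.
    static float[] decode(String base64) {
        ByteBuffer buf = ByteBuffer.wrap(Base64.getDecoder().decode(base64))
            .order(ByteOrder.LITTLE_ENDIAN);
        float[] vector = new float[buf.remaining() / Float.BYTES];
        for (int i = 0; i < vector.length; i++) {
            vector[i] = buf.getFloat();
        }
        return vector;
    }

    public static void main(String[] args) {
        float[] original = {0.5f, -1.25f, 3.0f};
        float[] roundTripped = decode(encode(original));
        System.out.println(java.util.Arrays.equals(original, roundTripped)); // prints "true"
    }
}
```

With a scheme like this, the coordinator could decode the string once during query rewrite and fan out the already-materialized `float[]` to the shards, which is the "parse just once" idea above.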