Closed
Labels

- :ml (Machine learning)
- :ml/Chunking
- >enhancement
- Feature:GenAI (Features around GenAI)
- Feature:NLP (Features and issues around NLP)
- Team:ML (Meta label for the ML team)
Description
As part of #121567 we are working to implement chunking for rerank inference calls (at least for the Elastic reranker). The initial design will always chunk documents sent for rerank and return, for each document, a relevance score equal to the highest score of any of its chunks. The chunking strategy will be selected for the user based on the model's token limit. The purpose of this issue is to investigate which settings of this process users may want to configure. We need to identify both which values users might want to control and where those settings should be stored.
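The max-over-chunks scoring described above can be sketched as follows. This is a minimal illustration, not the actual Elasticsearch implementation: the word-based `chunk_words` splitter, the `max_words` and `overlap` parameters, and the pluggable `score_chunk` callback are all hypothetical stand-ins for the real chunking strategy and rerank model call.

```python
from typing import Callable, List


def chunk_words(text: str, max_words: int, overlap: int = 0) -> List[str]:
    """Split text into word-based chunks of at most max_words words.

    A real implementation would chunk by model tokens, not words;
    words are used here only to keep the sketch self-contained.
    """
    words = text.split()
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks


def rerank_score(
    query: str,
    document: str,
    score_chunk: Callable[[str, str], float],
    max_words: int = 100,
) -> float:
    """Score a document as the maximum relevance score over its chunks."""
    return max(score_chunk(query, chunk) for chunk in chunk_words(document, max_words))
```

Candidate user-configurable settings fall directly out of this sketch: the chunk size, the overlap between chunks, and potentially the aggregation function (max vs. mean over chunk scores).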