Description
Please read this first
- Have you read the docs? Agents SDK docs - yes
- Have you searched for related issues? Others may have had similar requests - yes
Describe the feature
Currently, the framework handles message passing in a loop: it appends the model's output to the conversation and passes the full conversation back to the model at each iteration. This works fine for shorter conversations, but it becomes less efficient as the conversation grows, since the context sent to the model keeps expanding. In such cases, it could be beneficial to merge previous messages or recalculate the messages based on the context stored locally in memory.
Given the dynamic nature of conversations, it’s important to step back and consider whether the messages passed to the model truly represent the context needed for a useful response. Simply passing all previous messages verbatim might not always be the most effective strategy.
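As a concrete illustration of "merging previous messages", here is a minimal sketch of a history-compression function. The message shape (`{"role", "content"}` dicts) follows the usual chat format; the naive string join stands in for a real summarization step (e.g. an LLM-generated summary), and `compress_history` is a hypothetical helper name, not part of the SDK:

```python
def compress_history(messages, keep_last=4):
    """Merge all but the last `keep_last` messages into one summary message.

    A real implementation would summarize the older turns with a model;
    here we just concatenate them to show the shape of the transformation.
    """
    if len(messages) <= keep_last:
        return list(messages)
    older, recent = messages[:-keep_last], messages[-keep_last:]
    summary = " | ".join(m["content"] for m in older)
    # Replace the older turns with a single synthetic context message.
    return [{"role": "system", "content": f"Summary of earlier turns: {summary}"}] + recent
```

The point is that the messages sent to the model no longer need to be the verbatim transcript; they can be any recalculated representation of the conversation so far.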
Proposed Solution:
It would be great to have an agent hook that allows recalculating or adjusting the messages before passing them to the model. This would give us more control over how the conversation context is handled, especially when dealing with long dialogues or evolving contexts.
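To make the request concrete, here is a toy sketch of where such a hook could plug into the run loop. Everything here is hypothetical: `Runner`, `prepare_messages`, and `run_turn` are illustrative names, not the SDK's actual API; the idea is simply that a user-supplied callable rewrites the message list immediately before each model call:

```python
from typing import Callable, Dict, List, Optional

Message = Dict[str, str]

class Runner:
    """Toy loop showing where a message-recalculation hook could plug in."""

    def __init__(
        self,
        model_call: Callable[[List[Message]], Message],
        prepare_messages: Optional[Callable[[List[Message]], List[Message]]] = None,
    ):
        self.model_call = model_call
        # Default hook is the identity: pass the history through unchanged.
        self.prepare_messages = prepare_messages or (lambda msgs: msgs)

    def run_turn(self, history: List[Message], user_input: str) -> Message:
        history.append({"role": "user", "content": user_input})
        # The proposed hook: recalculate/adjust the context before the call.
        context = self.prepare_messages(history)
        reply = self.model_call(context)
        history.append(reply)
        return reply
```

With a hook like this, long-dialogue strategies (summarization, truncation, retrieval from local memory) stay entirely in user code, while the framework's loop is unchanged.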