Fix maxContextLength not being passed to openai adapter
#94
This PR partially addresses #93 but may not resolve the problem for other adapters.

A preset's maxContextLength was being incorrectly stripped from the settings object the openai adapter uses to determine its token budget, causing it to fall back to the basic preset value of 2048.

This might be happening for other adapters too, but it could be obscured by most other services not supporting more than 2048 tokens of context to begin with.

A better solution might be to special-case this value rather than add it to every serviceGenMap, since it seems like the kind of thing that 1) every service requires, and 2) doesn't even need to be sent to the downstream service at all, given that it's used to build the prompt within Agnai.

Edit: Following scueick's clarification, I think this probably doesn't happen on any other adapters and is specific to Turbo, because of how it has to transform the prompt built by the client into the weird format expected by OAI's chat endpoint.
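For what it's worth, the special-casing idea above could look something like the sketch below: pull maxContextLength out of the preset before the per-service key mapping, so every adapter keeps its token budget without each serviceGenMap having to list the key. All names here (Preset, buildAdapterSettings, DEFAULT_CONTEXT) are illustrative placeholders, not Agnai's actual API.

```typescript
// Hypothetical sketch, not Agnai's real code. The idea: map only the keys a
// service declares in its gen map, but always carry the context budget
// through, since it's used to build the prompt client-side and never needs
// to reach the downstream service.

type Preset = { maxContextLength?: number; temperature?: number }

const DEFAULT_CONTEXT = 2048 // fallback the bug was silently hitting

function buildAdapterSettings(
  preset: Preset,
  genMap: Record<string, string>
) {
  // Map only the keys this service's gen map declares.
  const mapped: Record<string, unknown> = {}
  for (const [from, to] of Object.entries(genMap)) {
    if (from in preset) mapped[to] = preset[from as keyof Preset]
  }

  // Special-case the context budget so it survives the mapping for every
  // adapter, instead of falling back to the default.
  const maxContextLength = preset.maxContextLength ?? DEFAULT_CONTEXT
  return { mapped, maxContextLength }
}
```

With this shape, an adapter would read maxContextLength from the returned object directly rather than hoping it survived the gen-map filtering.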