
Fetched documentation exceeds context window limit - Automatic truncation possible? #1212

@WismutHansen

Description


When opencode pulls documentation from websites, the resulting response can sometimes exceed the context length of the current model in use (currently Claude Sonnet 4 for me). When this happens, the session cannot be continued. Manually editing the session logs and removing the fetched content makes it possible to resume the session.

Could we implement a token counter that tracks how many tokens are left in a given session and truncates the fetched content to a size that still fits the context window (including a short hint to the model that the content was truncated)?

Here is the error message for reference:

AI_APICallError: input length and max_tokens exceed context limit: 184714 + 32000 > 200000, decrease input length or max_tokens and try again
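A minimal sketch of such a guard, assuming a rough 4-characters-per-token heuristic in place of the model's real tokenizer (the function and constant names here are hypothetical, not part of opencode):

```python
# Marker appended so the model knows the fetched page was cut short.
HINT = "\n\n[Note: fetched content truncated to fit the remaining context window.]"

def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # A real implementation would use the provider's tokenizer instead.
    return len(text) // 4 + 1

def truncate_to_fit(content: str, context_limit: int, used_tokens: int,
                    max_output_tokens: int) -> str:
    """Trim fetched content so input + max_tokens stays under the context limit."""
    budget = context_limit - used_tokens - max_output_tokens - approx_tokens(HINT)
    if budget <= 0:
        # Session is already at the limit; return only the hint.
        return HINT.strip()
    if approx_tokens(content) <= budget:
        return content
    # Cut at the approximate character budget, backing off to a line break
    # so the model does not see a word sliced in half.
    cut = content[: budget * 4]
    cut = cut.rsplit("\n", 1)[0] or cut
    return cut + HINT
```

With the numbers from the error above (context limit 200000, 184714 tokens already used, `max_tokens` 32000), the budget is already negative, so only the hint would be stored; earlier in a session, the fetched page would be trimmed to whatever budget remains.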
