Description
OpenCode does not gracefully handle a full model context. When the context limit is reached, the current message stays in a loading state forever and never finishes, and because no error is surfaced it is not obvious that the context is full.
opencode version: Any. This bug has existed for months and is still present.
Operating system: Windows
Steps to reproduce: Point a custom provider at LiteLLM (for example with GPT models) and run a very long session. The session eventually gets stuck in a permanent loading state, with no error or any indication that the context is full and needs compaction.
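For reference, a minimal sketch of the custom-provider setup used to reproduce this, assuming a LiteLLM proxy running locally on port 4000 (the provider name, model id, and baseURL below are illustrative, not from the original report):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "litellm": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://localhost:4000/v1"
      },
      "models": {
        "gpt-4o": {}
      }
    }
  }
}
```

With a config along these lines, any sufficiently long session should eventually hit the context limit and hang as described.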