I noticed that after changing the `execute_threshold_percentage` configuration, Opencode shows an error toast saying I’ve exceeded the model’s context window.
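For context, this is roughly the change I made. The key name is taken from my actual config; the surrounding file layout here is only a sketch and may not match magic-context's real config schema:

```json
{
  "execute_threshold_percentage": 30
}
```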
To preface, I’m using gpt-5.3-codex, which has a 400k context window, and magic-context has already been shrinking the context over time. Even with the threshold set to 30%, that should still leave a budget of around 120k tokens, so I wouldn’t expect the context limit to be hit.
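As a sanity check on that arithmetic, here is the budget I'd expect, assuming the percentage is applied directly to the model's full context window (which is my reading of the setting):

```python
# Expected execute-threshold budget, assuming execute_threshold_percentage
# is taken as a fraction of the model's full context window.
CONTEXT_WINDOW = 400_000         # gpt-5.3-codex context window, in tokens
THRESHOLD_PERCENTAGE = 30        # the value I set in the config

budget = CONTEXT_WINDOW * THRESHOLD_PERCENTAGE // 100
print(budget)  # 120000 tokens before the threshold should trigger
```

So even at the lowered threshold, the limit should sit well below the 400k window, not above it.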
Afterwards, it starts compacting using Opencode’s built-in compaction rather than magic-context’s compaction.
Once I reverted the configuration change, everything seemed to work fine again.
One more thing I noticed: in the TUI popup for `/ctx-status`, the execute threshold stays at 65% and doesn’t reflect the configured value.
