From 891a64603c5460732ed9ba3f7a579eaf89d79cae Mon Sep 17 00:00:00 2001
From: Eugenio
Date: Thu, 21 Aug 2025 13:56:50 -0600
Subject: [PATCH 1/2] added enhanced window ff section

---
 docs/cody/core-concepts/token-limits.mdx | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/docs/cody/core-concepts/token-limits.mdx b/docs/cody/core-concepts/token-limits.mdx
index 609324fa0..554ac256e 100644
--- a/docs/cody/core-concepts/token-limits.mdx
+++ b/docs/cody/core-concepts/token-limits.mdx
@@ -31,6 +31,27 @@ Here's a detailed breakdown of the token limits by model:
 
 For Cody Enterprise, the token limits are the standard limits. Exact token limits may vary depending on your deployment. Please get in touch with your Sourcegraph representative. For more information on how Cody builds context, see our [docs here](/cody/core-concepts/context).
 
+## Enhanced Context Windows (Feature Flag)
+
+Starting with Sourcegraph 6.5 for Enterprise, we rolled out the `enhanced-context-window` feature flag, which significantly expands Cody's context capabilities. This feature addresses developers' need to work with more context by expanding both input and output context windows.
+
+When the `enhanced-context-window` feature flag is enabled, Cody Enterprise customers get access to:
+
+**Input context window (via @mention and user input):**
+- Anthropic Claude: up to **150k tokens**
+- Google Gemini: up to **150k tokens**
+- OpenAI GPT-series: up to **102k tokens**
+- OpenAI o-series: up to **93k tokens**
+
+**Output context window:**
+- Anthropic Claude: up to **64k tokens**
+- Google Gemini: up to **65k tokens**
+- OpenAI GPT-series: **16k tokens**
+- OpenAI o-series: **100k tokens**
+- Reasoning models: up to **100k tokens**
+
+The enhanced context windows require the `enhanced-context-window` feature flag to be set to `true` in your Sourcegraph instance. Contact your Sourcegraph support if you need help enabling this feature.
+
 ## What is a Context Window?
 
 A context window in large language models refers to the maximum number of tokens (words or subwords) the model can process simultaneously. This window determines how much context the model can consider when generating text or code.

From 350224896d78f084b580141e22ab428c61d7f030 Mon Sep 17 00:00:00 2001
From: Eugenio
Date: Thu, 21 Aug 2025 14:16:07 -0600
Subject: [PATCH 2/2] small grammar fix

---
 docs/cody/core-concepts/token-limits.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/cody/core-concepts/token-limits.mdx b/docs/cody/core-concepts/token-limits.mdx
index 554ac256e..66d1561fe 100644
--- a/docs/cody/core-concepts/token-limits.mdx
+++ b/docs/cody/core-concepts/token-limits.mdx
@@ -50,7 +50,7 @@ When the `enhanced-context-window` feature flag is enabled, Cody Enterprise cust
 - OpenAI o-series: **100k tokens**
 - Reasoning models: up to **100k tokens**
 
-The enhanced context windows require the `enhanced-context-window` feature flag to be set to `true` in your Sourcegraph instance. Contact your Sourcegraph support if you need help enabling this feature.
+The enhanced context windows require the `enhanced-context-window` feature flag to be set to `true` in your Sourcegraph instance. Contact Sourcegraph support if you need help enabling this feature.
 
 ## What is a Context Window?
 
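The docs change above notes that the enhanced context windows require the `enhanced-context-window` feature flag to be set to `true` on the instance. As a companion illustration, here is a minimal Python sketch of how a site admin might create that boolean flag through the instance's GraphQL endpoint. It assumes the `createFeatureFlag` mutation is available on your Sourcegraph version, that the flag does not already exist, and that `SRC_ENDPOINT` and `SRC_ACCESS_TOKEN` are placeholder environment variables for your deployment URL and a site-admin access token; feature flags can also be managed from the site admin UI, and Sourcegraph support can enable the flag for you.

```python
"""Minimal sketch: enable the `enhanced-context-window` feature flag on a
Sourcegraph instance through its GraphQL API.

Assumptions: the instance exposes the `createFeatureFlag` mutation, the flag
does not already exist, and SRC_ENDPOINT / SRC_ACCESS_TOKEN point at your
deployment and a site-admin access token.
"""
import os

import requests

SRC_ENDPOINT = os.environ["SRC_ENDPOINT"]          # e.g. https://sourcegraph.example.com
SRC_ACCESS_TOKEN = os.environ["SRC_ACCESS_TOKEN"]  # site-admin access token

# Create a boolean feature flag named `enhanced-context-window` with value true.
MUTATION = """
mutation EnableEnhancedContextWindow($name: String!, $value: Boolean!) {
  createFeatureFlag(name: $name, value: $value) {
    __typename
  }
}
"""

response = requests.post(
    f"{SRC_ENDPOINT}/.api/graphql",
    headers={"Authorization": f"token {SRC_ACCESS_TOKEN}"},
    json={
        "query": MUTATION,
        "variables": {"name": "enhanced-context-window", "value": True},
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # any GraphQL errors are reported in the "errors" field
```

If you prefer not to script it, the same toggle can be made interactively from the site admin UI instead of the API call shown here.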