diff --git a/docs-chatbot.md b/docs-chatbot.md
index f36e0730e..cfc224f84 100644
--- a/docs-chatbot.md
+++ b/docs-chatbot.md
@@ -9,3 +9,11 @@ pnpm langbase-sync
```
This will verify all the changes since the last sync, update these files, and then write the commit hash to the `baseai/memory/docs/index.ts` file, which you should commit to keep track of what has been synced.
+
+### Update llms.txt file
+
+Run the following command to update the `llms.txt` file:
+
+```sh
+pnpm llmstxt
+```
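Taken together, a docs-maintenance pass can be scripted; the sketch below is illustrative and assumes both the `langbase-sync` and `llmstxt` scripts are defined in `package.json` and exit non-zero on failure:

```sh
#!/usr/bin/env sh
set -e  # stop on the first failing step

pnpm langbase-sync   # verify changes since the last sync and update memory files
pnpm llmstxt         # regenerate public/llms.txt

# Commit the sync marker so the last synced commit hash is tracked.
git add baseai/memory/docs/index.ts public/llms.txt
git commit -m "docs: sync langbase memory and regenerate llms.txt"
```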
diff --git a/public/llms.txt b/public/llms.txt
index b538532f3..6f5a0f4b9 100644
--- a/public/llms.txt
+++ b/public/llms.txt
@@ -5,6 +5,735 @@ This page documents all notable changes to Sourcegraph. For more detailed change
{/* CHANGELOG_START */}
+# 6.1 Patch 3
+
+## v6.1.2889
+
+- [sourcegraph](https://github.com/sourcegraph/sourcegraph/releases/tag/v6.1.2889)
+
+- [docker-compose](https://github.com/sourcegraph/deploy-sourcegraph-docker/releases/tag/v6.1.2889)
+
+- [helm](https://github.com/sourcegraph/deploy-sourcegraph-helm/releases/tag/v6.1.2889)
+
+- [kustomize](https://github.com/sourcegraph/deploy-sourcegraph-k8s/releases/tag/v6.1.2889)
+
+### Features
+
+#### Search
+
+- Adds `Team` support in SvelteKit ownership panel `(PR #3830)`
+ - Implement ownership information for the Sveltekit rewrite. Backport 30f3ea6a6c115c633291c388ff599ff107d7f38b from #3738
+- add ownership panel MVP to sveltekit app `(PR #3829)`
+ - Adds read-only ownership panel to Sveltekit frontend
+ - Gated behind 'svelte-ownership' feature flag
+ Backport 469f2ea37a214f3c0eb1cbde012625bdbef84b6f from #3558
+
+### Fix
+
+#### Batch Changes
+
+- transformChanges.group.directory should not interpret file names as directories `(PR #3726)`
+ - fix(batches): transformChanges.group.directory should not interpret file names as directories. Backport 6ffff463be1743b89ab865018e46a34ff4e549f5 from #3721
+
+#### Cody-Gateway
+
+- removes ModelCapabilityEdit from Claude 3.7 Sonnet `(PR #3737)`
+
+#### Search
+
+- missing symbol changes from merge commits in Rockskip `(PR #3844)`
+ - This fixes a bug in Rockskip (symbol search) where we would miss symbol changes introduced by merge commits. The bug manifested as incorrect search results and symbols-service errors similar to "pathspec (...) did not match any files". Backport d8426a9aec4930ce71922562fdebdcfd0d657cb4 from #3699
+
+#### Source
+
+- Fix bug where the token always has to be entered when editing certain code host connections `(PR #3751)`
+ - Fixed issue where the code host connection editor would always ask for the token to be re-entered. Backport 428c1eef19d68b38037bd35457632a007a78494d from #3719
+
+#### Workspaces
+
+- apply jitter to global reconciler `(PR #3772)`
+
+#### Others
+
+- update cody web to 0.31.1 to fix issue with pasting linebreaks (#3696) `(PR #3729)`
+ - fix: prompt templates should not fail when pasting linebreaks
+
+### Chore
+
+#### Security
+
+- Update Caddy `(PR #3906)`
+
+### Refactor
+
+#### Search
+
+- Normalize `displayName` across `Person` and `Team` types `(PR #3825)`
+
+### Reverts
+
+ There were no reverts for this release
+
+{/* RSS={"version":"v6.1.2889", "releasedAt": "2025-03-05"} */}
+
+
+# 6.1 Patch 2
+
+## v6.1.1295
+
+- [sourcegraph](https://github.com/sourcegraph/sourcegraph/releases/tag/v6.1.1295)
+
+- [docker-compose](https://github.com/sourcegraph/deploy-sourcegraph-docker/releases/tag/v6.1.1295)
+
+- [helm](https://github.com/sourcegraph/deploy-sourcegraph-helm/releases/tag/v6.1.1295)
+
+- [kustomize](https://github.com/sourcegraph/deploy-sourcegraph-k8s/releases/tag/v6.1.1295)
+
+### Features
+
+#### Cody-Gateway
+
+- add thinking/reasoning support to Anthropic models `(PR #3708)`
+ - Added support for chain-of-thought reasoning in Anthropic models, allowing users to see the model's thinking process for complex tasks. Backport 389bf9a4f2cf8ed7762cf8876b0efe4064e2b234 from #3507
+
+### Reverts
+
+ There were no reverts for this release
+
+{/* RSS={"version":"v6.1.1295", "releasedAt": "2025-02-25"} */}
+
+
+# 6.1 Patch 1
+
+## v6.1.376
+
+- [sourcegraph](https://github.com/sourcegraph/sourcegraph/releases/tag/v6.1.376)
+
+- [docker-compose](https://github.com/sourcegraph/deploy-sourcegraph-docker/releases/tag/v6.1.376)
+
+- [helm](https://github.com/sourcegraph/deploy-sourcegraph-helm/releases/tag/v6.1.376)
+
+- [kustomize](https://github.com/sourcegraph/deploy-sourcegraph-k8s/releases/tag/v6.1.376)
+
+### Features
+
+#### Agents
+
+- add support for globally enabled rules (#3480) `(PR #3502)`
+- simplify agent admin onboarding experience `(PR #3501)`
+ - You can now create a GitHub App with all the right permissions/events for code review agents. Previously, you had to manually customize the apps. Backport 87e8d77f464912cbd4356a8c3c39fba1d099e3b6 from #3473
+
+#### Cody
+
+- add "autocomplete" capability to Claude Haiku models `(PR #3638)`
+
+#### Tenant
+
+- telemetry for adding code `(PR #3560)`
+
+### Fix
+
+#### Agents
+
+- fix bug when using global rules in default revision `(PR #3524)`
+- fix bug where POST /reviews always failed `(PR #3522)`
+- improve error reporting in `POST /reviews` `(PR #3518)`
+- render errors as strings in agent run logs `(PR #3516)`
+ - Errors are now rendered as strings in agent run logs. Previously, they rendered as `Source: {}`, which wasn't helpful.
+ Backport d1f18d280fc9d3a985a284c31e819412d2add606 from #3514
+- hide listing of rules to fix unconditional error `(PR #3498)`
+
+#### Code Intelligence
+
+- Correctly handle document counts exceeding MaxInt32 `(PR #3596)`
+ - Fixes a bug in SCIP index processing for instances with a long history of processing large uploads. Backport 48e7b47898ee7710f12270c6861c335a2ef75f48 from #3595
+
+#### Release
+
+- check for and remove timescaledb extension `(PR #3584)`
+ - fix(rel): remove TimescaleDB from existing database if found during upgrade to Postgres 16 on the codeinsights database.
+ Backport 71b4af3d6faef054803db0151b2cc7b151bb1c0e from #3556
+
+#### Security
+
+- Allow the admin's HTTP auth provider headers in CORS preflight requests `(PR #3540)`
+ - HTTP header auth username and email headers, if configured, are no longer blocked by CORS. Backport 782b98a780dac335576b8f43affb4b1a10123882 from #3512
+
+#### Others
+
+- transformChanges.group.directory should ignore file names `(PR #3594)`
+ - fix: transformChanges.group.directory now ignores file names
+ Backport 3b76fe4ab146565b0e736231353b1e24f1468241 from #3576
+- Fix missing JSON schema on serve-git connection page `(PR #3567)`
+- do not close stream when tab is unfocused `(PR #3529)`
+ - Fixes an (unreleased) issue that could cause duplicate search results when switching between tabs.
+ Backport 746a29a25d6e54dfe7ab38f70855a9f58a652426 from #3528
+
+### Chore
+
+#### Agents
+
+- Allow non-site-admins to read agent endpoints [CODY-4962] `(PR #3511)`
+ - GET access to `/.api/agent/*` endpoints for non-site admins Backport 321543c38aab312c7d3924e19617f9f247c3a5fa from #3504
+
+### Reverts
+
+ There were no reverts for this release
+
+### Uncategorized
+
+#### Others
+
+- Backport 3542 to 6.1.x `(PR #3612)`
+- [Backport 6.1.x] tenant: Report newRepositoryTotalSizeBytes for setSelectedRepos `(PR #3568)`
+
+{/* RSS={"version":"v6.1.376", "releasedAt": "2025-02-19"} */}
+
+
+# 6.1 Patch 0
+
+## v6.1.0
+
+- [sourcegraph](https://github.com/sourcegraph/sourcegraph/releases/tag/v6.1.0)
+
+- [docker-compose](https://github.com/sourcegraph/deploy-sourcegraph-docker/releases/tag/v6.1.0)
+
+- [helm](https://github.com/sourcegraph/deploy-sourcegraph-helm/releases/tag/v6.1.0)
+
+- [kustomize](https://github.com/sourcegraph/deploy-sourcegraph-k8s/releases/tag/v6.1.0)
+
+### Features
+
+#### Agents
+
+- allow code review agent to auto-run based on feature flags `(PR #3477)`
+ - Code review agents can now automatically run on GitHub Pull Requests (actions: `opened` and `synchronize`) based on a feature flag. Both boolean (true/false) and rollout (percentage-based) feature flags are supported. For example, this means you can enable automatic reviews on 10% of all opened PRs.
+- Review Diagnostic Feedback [CODY-4951] `(PR #3456)`
+ - Adds a feedback UI for diagnostics within the Agents app.
+- report progress with GitHub Commit Status API `(PR #3445)`
+ - The Code Review Agent now reports live status with the GitHub Commit Status API making it possible to open Agent logs directly from GitHub.
+- Make Review Agent leverage PR title and description (CODY-4749) `(PR #3431)`
+- make review triggers configurable `(PR #3368)`
+ - You can now request review from the Review Agent by posting a pull request comment with a configurable substring
+- Code reviews deduplicate diagnostics from historical review [CODY-4743] `(PR #3355)`
+ - Code reviews deduplicate diagnostics from historical reviews
+- add basic review rules to repo `(PR #3335)`
+- rules-related API improvements `(PR #3305)`
+- read the cody repo for rules as well `(PR #3212)`
+- show rules that apply to a file in the code view `(PR #3200)`
+- more agents UI updates `(PR #3177)`
+- add Run API, view live progress on agent runs `(PR #3171)`
+ - Add `GET /.api/agents/runs` to list runs of an agent, and other related endpoints including the ability to view logs
+- make Review agent handle large diffs `(PR #3136)`
+ - The Review agent can now review larger diffs.
+- expose rules API endpoints and use rule URIs instead of IDs `(PR #3056)`
+- Adds Conversation HTTP handlers and generated DB columns [CODY-4751] `(PR #3021)`
+ - Adds HTTP handlers for `/.api/conversations` for creating and filtering conversations as well as a DB method for querying conversations.
+- agents UI and requisite new APIs `(PR #2818)`
+
+#### Code Intelligence
+
+- Syntactic indexing job resetter `(PR #3336)`
+- Periodically delete old audit logs from syntactic jobs `(PR #3278)`
+
+#### Cody
+
+- Add Prompt Caching to Code Context (CODY-4807) `(PR #3198)`
+
+#### Cody-Gateway
+
+- add actor-auth-status metric `(PR #3460)`
+- Add cache-related token usage data to telemetry (CODY-4808) `(PR #3396)`
+- Roll out new Gemini Models `(PR #3357)`
+ - Move Gemini 2.0 Flash from Experimental to GA, add Gemini 2.0 Flash-Lite Preview Experimental and Gemini 2.0 Pro Experimental
+
+#### Enterpriseportal
+
+- add tracking for verified domains `(PR #3447)`
+
+#### Lib/Msp
+
+- add OTEL instrumentation to gRPC clients by default `(PR #3428)`
+
+#### Local
+
+- install mise as part of `sg setup` and deprecate asdf `(PR #2877)`
+
+#### Model-Config
+
+- Implement reasoning parameters for OpenAI models `(PR #3489)`
+
+#### Msp/Auditlog
+
+- add trace ID to response headers `(PR #3248)`
+
+#### Release
+
+- Add init subsection to release.yaml `(PR #3223)`
+ - feat: add init cmd subsection to release.yaml parser
+ - feat: add init section to release.yaml
+
+- `sg upgradetest` `(PR #3388)`
+ - allow the upgradetest to be run locally without knowing the bazel invocation required to stamp the build etc
+ - unlock local minor and major branch upgradetests
+
+#### Search
+
+- Add enterprise starter cta to dotcom search home `(PR #3090)`
+
+#### Telemetry
+
+- add events for repo page views `(PR #3265)`
+- add billing metadata to toggles `(PR #3264)`
+
+#### Tenant
+
+- fix workspace management UI quirks `(PR #3227)`
+
+#### Web
+
+- Update cloning status to new designs `(PR #2760)`
+
+#### Workspace
+
+- workspace creation error UX `(PR #3383)`
+
+#### Workspaces
+
+- attach cf ray id to otel traces `(PR #3387)`
+- add WORKSPACES_ROUTER_DISABLE_RECONCILERS `(PR #3382)`
+- upsert/delete route on manual reconcile `(PR #3378)`
+- contact support to claim a name `(PR #3340)`
+- record clientside payment errors `(PR #3319)`
+- remove profanity blocks `(PR #3317)`
+- add cancellation/churn notifications in Slack `(PR #3314)`
+- add 'code search and code navigation' to plan page `(PR #3267)`
+- allow creation form to be pre-filled with a coupon `(PR #3228)`
+- seat-purchase notifications for Slack `(PR #3225)`
+- allow Stripe promo codes only, enforce short-lived promo codes `(PR #3222)`
+- enforce SAMS client per instance, add instance debug UI `(PR #3138)`
+- metric for Cloudflare interactions, include status from Stripe `(PR #3117)`
+- internal admin trigger to reconcile workspace `(PR #3070)`
+- add instance class option for employees `(PR #3067)`
+- write snapshot of workspace details into BigQuery on creation `(PR #3049)`
+- plans page refresh `(PR #2983)`
+
+#### Workspaces/Billing
+
+- add workspace name/displayname to Slack notifications `(PR #3262)`
+- include total seats in slack notification `(PR #3260)`
+
+#### Workspaces/Slack
+
+- make coupons links, refactor slack formatting `(PR #3332)`
+
+#### Workspaces/Telemetry
+
+- add marketingtracking and anonymous UID `(PR #3343)`
+
+#### Others
+
+- prompt templates editor supports @ current mentions `(PR #3397)`
+ - feat: prompt templates editor supports dynamic @ mentions
+- add event for codeintel highlights `(PR #3261)`
+- Add support for VoyageAI reranker `(PR #3155)`
+
+### Fix
+
+#### Agents
+
+- redirect to settings page after creating agents `(PR #3471)`
+ - Creating a new agent now redirects to the settings page for further setup
+- correct typo in system prompt `(PR #3421)`
+- disable Agents unless feature flag is enabled `(PR #3420)`
+- show diagnostics on review page `(PR #3358)`
+- fix bug where review agent posted comments on wrong lines `(PR #3347)`
+- Add fallback when rules list is empty (CODY-4834) `(PR #3285)`
+- use globs instead of regexp for include/exclude filters `(PR #3277)`
+ - Rule include/exclude patterns are now interpreted as globs (`*.go`) instead of regexp (`.*\.go`). Negative include patterns like `!*.go` will be interpreted as exclude patterns, and vice versa.
+- remove agents from the navbar on dotcom `(PR #3166)`
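The glob semantics described for include/exclude filters can be sketched with Python's `fnmatch`. This is a hypothetical illustration of the rule, not Sourcegraph's actual implementation; the `matches` helper and its signature are assumptions:

```python
from fnmatch import fnmatch

def matches(path: str, include: list[str], exclude: list[str]) -> bool:
    """Return True if path passes the include/exclude filters.

    A negative include pattern like '!*.go' is treated as an exclude,
    mirroring the behavior described in the changelog entry.
    """
    # Move negated includes over to the exclude list.
    effective_exclude = list(exclude)
    effective_include = []
    for pat in include:
        if pat.startswith("!"):
            effective_exclude.append(pat[1:])
        else:
            effective_include.append(pat)

    if any(fnmatch(path, pat) for pat in effective_exclude):
        return False
    # An empty include list means "include everything".
    return not effective_include or any(fnmatch(path, pat) for pat in effective_include)

# '*.go' is a glob, not the regexp '.*\.go'
print(matches("main.go", ["*.go"], []))   # True
print(matches("main.py", ["*.go"], []))   # False
print(matches("main.go", ["!*.go"], []))  # negated include acts as exclude -> False
```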
+
+#### Auth
+
+- Add missing allowSignup option to HTTP header auth provider `(PR #3232)`
+ - fix/auth: the "http-header" auth provider can now set "allowSignup": false to disable automatic account creation
+
+#### Ci
+
+- disable puppeteer browser tests `(PR #3298)`
+- disable client checks `(PR #3290)`
+- update licenses script and rerun it `(PR #3053)`
+
+#### Code Intelligence
+
+- Optimize GetSymbolUsages core query `(PR #3035)`
+- Decorrelate subquery for hover docs `(PR #3031)`
+- Bound number of docs read for GetHover `(PR #3000)`
+
+#### Cody
+
+- include token usage in OpenAI streaming requests `(PR #3441)`
+- Check that completion exists `(PR #3410)`
+- include token usage in OpenAI streaming requests `(PR #3376)`
+- use max_completion_tokens field for OpenAI `(PR #3362)`
+- Pull correct client-name param `(PR #3167)`
+- show actual chat quota usage for free-tier users `(PR #2970)`
+- filter allowed models by capability for PLG users `(PR #2506)`
+
+#### Cody-Gateway
+
+- v1/limits handler `(PR #2908)`
+
+#### Dev
+
+- internal/memcmd: add support for using mise in tests `(PR #3256)`
+
+#### Local
+
+- set GOWORK=off for frontend in sg `(PR #3111)`
+
+#### Models
+
+- update OpenAI o3-mini model pricing `(PR #3303)`
+
+#### Msp/Iam
+
+- grant Cloud Deploy executor access to images `(PR #3175)`
+
+#### Msp/Runtime
+
+- fix pgx pool stats metric `(PR #3126)`
+
+#### Multi Tenant
+
+- bring back description text to the github add account setup step `(PR #3325)`
+- improve workspace validation states `(PR #3093)`
+
+#### Multitenant/Mt-Router
+
+- fix up routes and redirect `(PR #3195)`
+
+#### Release
+
+- fix migrator update check `(PR #3173)`
+ - fix(rel): fix migrator upgrade check
+
+#### Search
+
+- Change chevrons in in-file search panel `(PR #3367)`
+- Web app broken if settings contains 'message' key `(PR #3363)`
+- Bust cache for new logo (favicon) `(PR #3350)`
+- Use separate light and dark SG logo variants `(PR #3280)`
+- Link to correct dashboard page in extensions CTA `(PR #3182)`
+- Fix search aggregation chart popover `(PR #3046)`
+- Chat tips modal covered by file sidebar `(PR #3016)`
+
+#### Source
+
+- Fix Gerrit clone URL resolution `(PR #3446)`
+- Fix inability to update Gerrit code host config URL `(PR #3361)`
+ - Fix bug where the URL of a Gerrit code host connection could not be updated.
+- web: ensure list of external accounts has unique entry for each key `(PR #3323)`
+ - A bug on the user's account security page that could result in duplicated / buggy entries has been fixed.
+- RepoSource.BitbucketServer.CloneURLToRepoName(): support more URL shapes `(PR #3224)`
+ - The logic that translates Bitbucket clone URLs to repository names has been fixed to support:
+ - URLs that have no scheme (like `"bitbucket.sgdev.org/sourcegraph/sourcegraph"`)
+ - SSH clone urls that don't have a `ssh://` scheme prefix (like `git@bitbucket.sgdev.org:sourcegraph/sourcegraph.git`)
+- gitserver: merge base: add explicit test to ensure ordering of RevisionNotFoundErrors `(PR #2779)`
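The clone-URL shapes listed above could be normalized roughly as follows. This is a hypothetical Python sketch of the idea, not Sourcegraph's Go implementation; the function name `clone_url_to_repo_name` is made up for illustration:

```python
from urllib.parse import urlparse

def clone_url_to_repo_name(clone_url: str) -> str:
    """Derive a repo name like 'bitbucket.sgdev.org/sourcegraph/sourcegraph'
    from the clone-URL shapes mentioned in the changelog entry."""
    url = clone_url.strip()

    # SSH shorthand without a scheme: git@host:org/repo.git
    if "@" in url and "://" not in url and ":" in url.split("@", 1)[1]:
        host, path = url.split("@", 1)[1].split(":", 1)
    else:
        # Prepend a scheme so urlparse puts the host in netloc
        # for scheme-less URLs like 'bitbucket.sgdev.org/org/repo'.
        if "://" not in url:
            url = "https://" + url
        parsed = urlparse(url)
        host = parsed.hostname or ""
        path = parsed.path.lstrip("/")

    if path.endswith(".git"):
        path = path[: -len(".git")]
    return f"{host}/{path}"

print(clone_url_to_repo_name("bitbucket.sgdev.org/sourcegraph/sourcegraph"))
print(clone_url_to_repo_name("git@bitbucket.sgdev.org:sourcegraph/sourcegraph.git"))
# both -> bitbucket.sgdev.org/sourcegraph/sourcegraph
```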
+
+#### Tenant/Repositories
+
+- do not show search text if empty `(PR #3321)`
+
+#### Ui
+
+- Display Revision not found instead of Empty repo `(PR #3235)`
+
+#### Web
+
+- add missing separator under organizations `(PR #3253)`
+
+#### Workspaces
+
+- correctly apply management retries `(PR #3384)`
+- spread out routerreconciler workload more `(PR #3377)`
+- reduce frequency of custom input telemetry event `(PR #3342)`
+- apply coupon when estimating price on existing subscription `(PR #3331)`
+- tweak plan page copy again `(PR #3328)`
+- use display name on join page `(PR #3326)`
+- drill coupon from homepage, make sure coupon is applied immediately `(PR #3312)`
+- more sentence-casing fixes `(PR #3255)`
+- fix capitalization in creation form `(PR #3218)`
+- hide open-invites toggle when email domain is not allowed `(PR #3213)`
+- price is monthly `(PR #3197)`
+- fix name validation `(PR #3165)`
+- include class in unseen instance counts, remove unseen instances from normal counts `(PR #3134)`
+- log check-name errors as unexpected errors `(PR #3075)`
+- test that post-normalization bad words are caught `(PR #3047)`
+
+#### Workspaces/At-Capacity
+
+- capitalize error message `(PR #3381)`
+
+#### Workspaces/Metrics
+
+- report all workspace/instance states `(PR #3297)`
+
+#### Others
+
+- Color removed from filter sidebar in search `(PR #3399)`
+- change workspace icon in profile menu `(PR #3241)`
+- Fix yaml file `(PR #3183)`
+- index matches safely `(PR #3168)`
+- server checks reindex at 5.10 now `(PR #2881)`
+ - single docker server checks for 5.10-reindex.completed instead of 5.1-reindex.completed
+
+### Chore
+
+#### Agents
+
+- Basic Telemetry for Code Review [CODY-4903] `(PR #3389)`
+- [CODY-4830] Update Conversation and Review store patterns `(PR #3259)`
+- remove typebox for runtime type validation `(PR #3214)`
+
+#### Ci
+
+- migrate rules_oci to MODULE.bazel `(PR #3352)`
+- fixes gazelle issues `(PR #3296)`
+- migrate rules_go and gazelle to bazel mod `(PR #1716)`
+
+#### Cloud
+
+- update cloud-mi2 wolfi base image `(PR #3199)`
+
+#### Code Intelligence
+
+- Make scip-syntax parallelism configurable `(PR #3206)`
+- Use clearer names & error propagation in SCIW initialization `(PR #2997)`
+
+#### Deps
+
+- upgrade sourcegraph-accounts-sdk-go `(PR #3142)`
+
+#### Dev
+
+- Upgrade to pnpm v9.15.4 `(PR #3266)`
+- Delete go.mod for monitoring/ directory `(PR #2951)`
+
+#### Modelconfig
+
+- remove spammy logs `(PR #3452)`
+
+#### Msp/Cloudsql
+
+- remove pgx.Acquire span `(PR #3141)`
+
+#### Release
+
+- update src-cli dependencies for 6.0.0 release `(PR #3186)`
+ - Release src-cli 6.0.0
+
+#### Search
+
+- Permanently enable the new web app on dotcom `(PR #3448)`
+- Remove new branding branching logic `(PR #3270)`
+
+#### Security
+
+- Auto-update all packages in Sourcegraph base images `(PR #3482)`
+- Auto-update all packages in Sourcegraph base images `(PR #3239)`
+
+#### Workspaces
+
+- Remove cody upsell in workspace creation `(PR #3391)`
+- Update ES Cody limits to 2x Enterprise `(PR #3069)`
+- make user metadata available on userauth.UserInfo `(PR #3051)`
+
+#### Workspaces/Blocklists
+
+- add 'unconfigured' to blocked names `(PR #3043)`
+
+#### Workspaces/Web
+
+- rename 'quickJoin' vars to 'openInvite' `(PR #2974)`
+
+#### Others
+
+- update list of allowed headers for untrusted clients `(PR #3486)`
+- add events for hoverables `(PR #3433)`
+- update third-party licenses `(PR #3414)`
+- Output formatting updates based on feedback `(PR #3349)`
+- Entitle URL updates `(PR #3345)`
+- Add more doc comments for file checker types `(PR #3307)`
+- update third-party licenses `(PR #3273)`
+- fix integration tests `(PR #3263)`
+- Update teams.yml for product platform changes `(PR #3252)`
+- Remove orphaned modules / import to non-existing module `(PR #3219)`
+- Drop hubspot logging from non-dotcom auth methods `(PR #3132)`
+- update third-party licenses `(PR #3095)`
+- Remove gorilla/context `(PR #3089)`
+- Layout finetuning of Creating Workspaces and Tenant Onboarding `(PR #3065)`
+
+### Reverts
+
+- Revert: use max_completion_tokens field for OpenAI and the inclusion of token usage in OpenAI streaming requests `(PR #-1)`
+- Revert "Omnibox: route likely code generation commands to Chat (#2969)" `(PR #2969)`
+
+### Uncategorized
+
+#### Others
+
+- Update Cody Web v0.31.0 `(PR #3474)`
+- gateway: Bump limits for ES `(PR #3409)`
+- gomod: update Zoekt for file language fix `(PR #3408)`
+- msp: use TFC robot email for IAP `(PR #3400)`
+- Alexjean baptiste cody 4813 azure gpt model enum update or override `(PR #3380)`
+- web: Make CTA for workspaces correctly reload page for marketing content `(PR #3341)`
+- Fix path to point to updated `useObservables` family of hooks `(PR #3322)`
+- Removing DeepSeek V3 model `(PR #3309)`
+- workspaces: Never show wallet payments `(PR #3304)`
+- sg: Extend the mise check to early-exit if env var is set `(PR #3302)`
+- refactor(cody-gateway) Remove support for OpenAI o1-mini model (CODY-4839) `(PR #3295)`
+ - Remove support for OpenAI o1-mini model.
+- refactor(cody-gateway) Deprecate Gemini 1.5 Flash, Claude 3 Opus, Claude 3 Haiku, and Mixtral 8x7B (CODY-4839) `(PR #3293)`
+ - Deprecate Gemini 1.5 Flash, Claude 3 Opus, Claude 3 Haiku, and Mixtral 8x7B
+- gomod: bump Zoekt after package restructuring `(PR #3271)`
+- Adding O3 mini model to OpenAI `(PR #3254)`
+- telemetry: add ClientFeature for more granular reporting of search events `(PR #3229)`
+- Search: enable scip-ctags for C `(PR #3215)`
+- cleanup: simplify search observables and switch to fetch-event-source `(PR #3196)`
+- completions: improve prometheus metrics for code/chat completions `(PR #3181)`
+- partially fix erroneous `svelte-check` errors in local dev `(PR #3180)`
+- Adding DeepSeek V3 support through fireworks `(PR #3170)`
+- lib/marketingtracking: publish module for SAMS to consume `(PR #3169)`
+- web: Some more user menu polish `(PR #3140)`
+- remove builtin rules from code review agent `(PR #3086)`
+- workaround for `sg start minimal-sveltekit` and other "minimal" entrypoints `(PR #3085)`
+- doc/workspaces: fix link to chargeback playbook `(PR #3076)`
+- generate TypeScript types and runtime type validators for Agent API `(PR #3055)`
+- Update sg setup for employees `(PR #2990)`
+- chore(Workspaces) Fix repo size increment from 500mb to 1 gig `(PR #2977)`
+- feat(agents) Conversations API DB Tables [CODY-4683] `(PR #2964)`
+ - Adds `agent_conversations` and `agent_conversation_messages` tables as well as `ConversationStore`
+- Replace o1 preview model with o1 `(PR #2924)`
+- feature(codeintel): add syntax highlighting for Svelte `(PR #2690)`
+ - Added syntax highlighting for `.svelte` files
+- mt-router: allow passthrough for instance health check endpoint `(PR #2689)`
+
+### Untracked
+
+The following PRs were merged onto the previous release branch but could not be automatically mapped to a corresponding commit in this release:
+
+- Fix Gerrit clone URL resolution (#3446) `(PR #3449)`
+- Fix inability to update Gerrit code host config URL (#3361) `(PR #3439)`
+ - Fix bug where the URL of a Gerrit code host connection could not be updated. (cherry picked from commit e6da1ceb9586bf109339f06220c1fdbbf570a6d9)
+- prompt templates editor supports @ current mentions (#3397) `(PR #3436)`
+ - feat: prompt templates editor supports dynamic @ mentions
+- Roll out new Gemini Models (#3357) `(PR #3412)`
+ - Move Gemini 2.0 Flash from Experimental to GA, add Gemini 2.0 Flash-Lite Preview Experimental and Gemini 2.0 Pro Experimental. Co-authored-by: arafatkatze [arafat.da.khan@gmail.com](mailto:arafat.da.khan@gmail.com) (cherry picked from commit 4aa5aa41cc0f5e2be80c77e8e8709e198ff54b94)
+- Backport 3254 to 6.0.x `(PR #3300)`
+- [backport] fix/web: add missing separator under organizations (#3253) `(PR #3282)`
+- Auto-update all packages in Sourcegraph base images `(PR #3237)`
+- [backport] feat/tenant: fix workspace management UI quirks (#3227) `(PR #3230)`
+- [backport] feat/web: Update cloning status to new designs (#2760) `(PR #3158)`
+- Layout finetuning of Creating Workspaces and Tenant Onboarding (#3065) `(PR #3139)`
+- Revert "Fix: Buildkite Pipeline generates images with specific cloud tags for S2 deployments" `(PR #2985)`
+- Revert "fix: set the tag in the push_all.sh script" `(PR #2989)`
+- Fix: Buildkite Pipeline generates images with specific cloud tags for S2 deployments `(PR #2985)`
+ - NA - no customer facing changes
+
+{/* RSS={"version":"v6.1.0", "releasedAt": "2025-02-17"} */}
+
+
+# 6.0 Patch 1
+
+## v6.0.2687
+
+- [sourcegraph](https://github.com/sourcegraph/sourcegraph/releases/tag/v6.0.2687)
+
+- [docker-compose](https://github.com/sourcegraph/deploy-sourcegraph-docker/releases/tag/v6.0.2687)
+
+- [helm](https://github.com/sourcegraph/deploy-sourcegraph-helm/releases/tag/v6.0.2687)
+
+- [kustomize](https://github.com/sourcegraph/deploy-sourcegraph-k8s/releases/tag/v6.0.2687)
+
+### Features
+
+#### Cody-Gateway
+
+- Roll out new Gemini Models (#3357) `(PR #3412)`
+ - Move Gemini 2.0 Flash from Experimental to GA, add Gemini 2.0 Flash-Lite Preview Experimental and Gemini 2.0 Pro Experimental. Co-authored-by: arafatkatze [arafat.da.khan@gmail.com](mailto:arafat.da.khan@gmail.com) (cherry picked from commit 4aa5aa41cc0f5e2be80c77e8e8709e198ff54b94)
+
+#### Multitenant
+
+- add support for alternate default tenant hostname `(PR #3176)`
+
+#### Security
+
+- Create Binary Authorization attestations when promoting images to public `(PR #3478)`
+
+#### Others
+
+- prompt templates editor supports @ current mentions (#3397) `(PR #3436)`
+ - feat: prompt templates editor supports dynamic @ mentions
+
+### Fix
+
+#### Multi Tenant
+
+- fixing paper cuts for mt marketing launch (#3242) `(PR #3281)`
+- misc wording fixes in workspaces onboarding `(PR #3207)`
+- fixes welcome message alert `(PR #3174)`
+
+#### Source
+
+- Fix Gerrit clone URL resolution (#3446) `(PR #3449)`
+- Fix inability to update Gerrit code host config URL (#3361) `(PR #3439)`
+ - Fix bug where the URL of a Gerrit code host connection could not be updated. (cherry picked from commit e6da1ceb9586bf109339f06220c1fdbbf570a6d9)
+- Normalize code host URLs during code host config unmarsha… `(PR #3438)`
+- gitserver: Unambiguously identify commit boundaries in git log `(PR #3411)`
+ - Commit listing should work correctly for repos which contain arbitrary characters in commit messages. This also affects downstream functionality such as commit graph updates needed for precise code navigation. Backport 2eae8e1 from #3359
+
+### Chore
+
+#### Security
+
+- Auto-update all packages in Sourcegraph base images `(PR #3237)`
+
+### Reverts
+
+ There were no reverts for this release
+
+### Uncategorized
+
+#### Others
+
+- [Backport 6.0.x] gitserver: Fix API endpoint for installation token creation `(PR #3432)`
+- [backport] chore/source: Update src-cli to 6.0.1 (#3365) `(PR #3416)`
+ - Update src-cli version to 6.0.1
+- [backport] fix/tenant: fix invitation expiry, user management UX improvements (#3353) `(PR #3356)`
+- [Backport 6.0.x] tenant: Also allow to add fork repos `(PR #3348)`
+- Backport 3254 to 6.0.x `(PR #3300)`
+- [backport] feat/prompts: support public prompts/saved-searches in workspaces using RBAC (#3257) `(PR #3294)`
+- [backport] fix/web: add missing separator under organizations (#3253) `(PR #3282)`
+- [backport] feat/tenant: fix workspace management UI quirks (#3227) `(PR #3230)`
+
+{/* RSS={"version":"v6.0.2687", "releasedAt": "2025-02-12"} */}
+
+
# 6.0 Patch 0
> Attention - Postgres 12 is no longer supported! If upgrading from Sourcegraph version 5.9 or earlier, this release will update our included database container images from Postgres 12 to Postgres 16.
@@ -2243,6 +2972,25 @@ The following PRs were merged onto the previous release branch but could not be
{/* RSS={"version":"v5.10.0", "releasedAt": "2024-11-27"} */}
+# 5.9 Patch 4
+
+## v5.9.17785
+
+- [sourcegraph](https://github.com/sourcegraph/sourcegraph/releases/tag/v5.9.17785)
+
+- [docker-compose](https://github.com/sourcegraph/deploy-sourcegraph-docker/releases/tag/v5.9.17785)
+
+- [helm](https://github.com/sourcegraph/deploy-sourcegraph-helm/releases/tag/v5.9.17785)
+
+- [kustomize](https://github.com/sourcegraph/deploy-sourcegraph-k8s/releases/tag/v5.9.17785)
+
+### Fix
+
+#### Cody
+
+- Allow specification of additional chat GPT models for Cody `(PR #3434)`
+ - Backport 5190f43a4d09810e69400c5d0e6d9176b3c4b815 from #3380
+
# 5.9 Patch 3
## v5.9.1590
@@ -7211,6 +7959,9 @@ Currently supported versions of Sourcegraph:
| **Release** | **General Availability Date** | **Supported** | **Release Notes** | **Install** |
|--------------|-------------------------------|---------------|--------------------------------------------------------------------|------------------------------------------------------|
+| 6.1 Patch 1 | February 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v61376) | [Install](https://sourcegraph.com/docs/admin/deploy) |
+| 6.1 Patch 0 | February 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v610) | [Install](https://sourcegraph.com/docs/admin/deploy) |
+| 6.0 Patch 1 | February 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v602687) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 6.0 Patch 0 | January 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v600) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.11 Patch 5 | January 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5116271) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.11 Patch 4 | January 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5114013) | [Install](https://sourcegraph.com/docs/admin/deploy) |
@@ -7275,6 +8026,12 @@ These versions fall outside the release lifecycle and are not supported anymore:
This page displays the docs for legacy Sourcegraph versions less than 5.1
+
+
+ - [6.0](https://6.0.sourcegraph.com)
+
+
+
- [5.11](https://5.11.sourcegraph.com/docs)
@@ -7641,7 +8398,156 @@ Slack Support provides access to creating tickets directly from Slack, allowing
-
+
+# Sourcegraph Pricing Plan Comparison
+
+
This page lists a detailed comparison of the features available in each plan.
+
+| **Features** | **Free** | **Enterprise Starter** | **Enterprise** |
+| -------------------------------- | ----------------------------------------------------- | ----------------------------------------------------- | ----------------------------------------------------- |
+| **AI** | | | |
+| Autocomplete | Unlimited | Unlimited | Unlimited |
+| Chat messages and prompts | 200/month | Increased limits | Unlimited |
+| Code context and personalization | Local codebase | Remote codebase (GitHub only) | Remote, enterprise-scale codebases |
+| Integrated search results | - | ✓ | ✓ |
+| Prompt Library | ✓ | ✓ | ✓ |
+| Bring your own LLM Key | - | - | Self-Hosted only |
+| Auto-edit | - | Experimental | Experimental |
+| Agentic chat experience          | -                                                     | Experimental                                          | Experimental                                          |
+| **Code Search** | | | |
+| Code Search | - | ✓ | ✓ |
+| Code Navigation | - | ✓ | ✓ |
+| Code Insights | - | - | ✓ |
+| Code Monitoring | - | - | ✓ |
+| Batch Changes | - | - | ✓ |
+| **Deployment** | | | |
+| Cloud deployment | Multi-tenant | Multi-tenant | Single tenant |
+| Self hosted option | - | - | ✓ |
+| Private workspace | - | ✓ | ✓ |
+| **Admin and Security** | | | |
+| SSO/SAML | Basic (GH/GL/Google) | Basic (GH/GL/Google) | ✓ |
+| Role-based access control | - | - | ✓ |
+| Analytics | - | Basic | ✓ |
+| Audit logs | - | - | ✓ |
+| Guardrails | - | - | Beta |
+| Indexed code | - | Private | Private |
+| Context Filters | - | - | ✓ |
+| **Compatibility** | | | |
+| Code hosts                       | Local codebase                                        | GitHub                                                | All major code hosts                                  |
+| IDEs | VS Code, JetBrains IDEs, Visual Studio (Experimental) | VS Code, JetBrains IDEs, Visual Studio (Experimental) | VS Code, JetBrains IDEs, Visual Studio (Experimental) |
+| Human languages | Many human languages, dependent on the LLM used | Many human languages, dependent on the LLM used | Many human languages, dependent on the LLM used |
+| Programming languages | All popular programming languages | All popular programming languages | All popular programming languages |
+| **Support** | | | |
+| Support level | Community support | Community support | Enterprise support |
+| Dedicated TA support | - | - | Add-on |
+| Premium support | - | - | Add-on |
+
+
+
+
+# Pricing and Billing FAQs
+
+
Learn about billing and pricing FAQs for Sourcegraph plans.
+
+## What's the difference between Free, Enterprise Starter, and Enterprise plans?
+
+Free is best for individuals working on hobby projects.
+
+Enterprise Starter is for growing organizations who want Sourcegraph's AI & search experience hosted on our cloud.
+
+Enterprise is for organizations that want AI and search across the SDLC with enterprise-level security, scalability, and flexible deployment.
+
+## How are autocompletions counted for the Cody Free plan?
+
+Cody autocompletions are counted based on the number of suggestions served to the user in their IDE as ghost text. This includes all suggestions, whether or not the user accepts them.
+
+## How does Sourcegraph's context and personalization work?
+
+Cody can retrieve codebase context to personalize responses in several ways. For Free and Pro users, context is retrieved from their local repositories.
+
+The Enterprise Starter and Enterprise plans use Sourcegraph's search backend to retrieve context. This method pulls context from a team's full codebase at any scale.
+
+## What forms of support are available for paid plans?
+
+Email and web portal support is available to both Enterprise Starter and Enterprise customers, and you can [read more about our SLAs](/sla). Premium support with enhanced SLAs is also available as an add-on for Enterprise customers.
+
+## Can I upgrade or downgrade my plan?
+
+Users can upgrade or downgrade between Free and Pro in their account settings at any time. Pro users can also upgrade to Enterprise Starter, but doing so does not currently cancel the Pro subscription; you must cancel it yourself.
+
+To upgrade to Enterprise, please [contact our Sales team](https://sourcegraph.com/contact/request-info).
+
+## What's the difference between "flexible LLM options" and "bring your own LLM key"?
+
+Flexible LLM options: Users can select from multiple options to use for chat.
+
+Bring your own LLM key: Enterprise customers can optionally provide their own LLM API key for supported LLMs (including for LLM services such as Azure OpenAI and Amazon Bedrock). In this scenario, customers pay for their own LLM consumption, and we will provide a pricing discount with your plan.
+
+## Does Sourcegraph use my code to improve the models used by other people?
+
+For Enterprise and Enterprise Starter customers, Sourcegraph will not train on your company's data unless your instance admin enables fine-tuning, which would customize an existing model exclusively for your use.
+
+For Free and Pro users, Sourcegraph may use your data to fine-tune the model you are accessing unless you disable this feature.
+
+## Can Sourcegraph be run fully self-hosted?
+
+Sourcegraph requires cloud-based services to power its AI features. For customers looking for a fully self-hosted or air-gapped solution, please [contact us](https://sourcegraph.com/contact/request-info).
+
+## Is an annual contract required for any of the plans?
+
+No annual contract is required for the Pro and Enterprise Starter plans; they are billed monthly and can be paid with a credit card.
+
+## How are active users counted and billed?
+
+This only applies to Enterprise contracts. Pro and Enterprise Starter users pay for a seat every month, regardless of usage.
+
+A billable user is one who is signed in to their Enterprise account and actively interacts with the product (e.g., they see suggested autocompletions; run commands or chat with Cody; start new discussions; clear chat history; copy text from chats; change settings; and more). Simply having Cody installed is not enough to be considered a billable user.
+
+## Is my data secure when connected to Code Search?
+
+Sourcegraph has security and reliability controls built for the most demanding enterprises. To learn more, see our [Security page](https://sourcegraph.netlify.app/security).
+
+## What if I want AI or Code Search Enterprise only?
+
+You can purchase Enterprise plans for AI or Code Search only. [Contact us](https://sourcegraph.com/contact/request-info) to learn more.
+
+## Which code hosts is the Enterprise Starter plan compatible with?
+
+The Enterprise Starter plan is currently compatible with GitHub. Its limit for indexing is 100 repositories for search and context.
+
+## What are the limits of the Enterprise Starter plan?
+
+The Enterprise Starter plan supports up to 50 developers and, alongside a limit of 100 repositories for search and context, also includes 5 GB of storage. Adding additional seats gives you 1 GB of additional storage per seat, for a maximum total of 10 GB.
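Read literally, the storage rule above can be sketched as follows (an illustrative reading, not an official billing formula; the function name is made up):

```python
def workspace_storage_gb(additional_seats: int) -> int:
    """Illustrative sketch of the Enterprise Starter storage rule:
    a 5 GB base, plus 1 GB per additional seat, capped at 10 GB total."""
    return min(5 + additional_seats, 10)

print(workspace_storage_gb(0))   # base workspace -> 5
print(workspace_storage_gb(3))   # three extra seats -> 8
print(workspace_storage_gb(20))  # cap applies -> 10
```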
+
+## Billing FAQs for Enterprise Starter
+
+## How do I cancel subscription renewal?
+
+On the **Workspace settings > Billing** page, you can cancel the subscription and keep access to your workspace until the end of your current billing period (the end date is indicated in the UI).
+
+## How do I cancel my subscription and delete my workspace immediately?
+
+On the **Workspace settings > General settings** page, you can delete your workspace. This will immediately remove access and cancel your subscription.
+
+## How are subscription renewal dates determined?
+
+Your subscription renews on the same day of each month. In shorter months (e.g., a renewal day of the 31st in April, which has only 30 days), the subscription renews on the last day of the month instead.
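The clamping rule described above can be sketched as follows (an illustrative sketch; `renewal_day` is a hypothetical helper, not part of any Sourcegraph API):

```python
import calendar

def renewal_day(year: int, month: int, anchor_day: int) -> int:
    # The renewal falls on the anchor day, clamped to the last
    # day of months that are shorter than the anchor day.
    last_day = calendar.monthrange(year, month)[1]
    return min(anchor_day, last_day)

print(renewal_day(2025, 4, 31))  # April has 30 days -> 30
print(renewal_day(2025, 5, 31))  # May has 31 days -> 31
```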
+
+## How do I pay my invoice if my subscription is past due?
+
+After updating your payment method or resolving the issue that occurred during the automatic subscription renewal, you can do one of the following to pay the invoice for your past-due subscription:
+
+1. Wait for our system to re-attempt the charge for the invoice. This usually takes up to 24 hours. If the charge has not gone through after 24 hours, please contact [Support](mailto:support@sourcegraph.com) to resolve the issue.
+1. On the **Workspace settings > Billing** page, click the **View invoices** button, which takes you to the Stripe Customer Portal, and pay the invoice there. Our system will reconcile your payment within 24 hours. If it does not reconcile after 24 hours, please contact [Support](mailto:support@sourcegraph.com) to resolve the issue.
+1. Contact [Support](mailto:support@sourcegraph.com) to request a re-attempt of the charge using the payment method on file.
+
+## Are there any refunds for the subscription?
+
+We don't offer refunds, but if you have any queries regarding the Enterprise Starter plan, please write to support@sourcegraph.com, and we'll help resolve the issue.
+
+
+
+
# Free
Learn about Sourcegraph's Free plan and the features included.
@@ -7654,10 +8560,9 @@ The Free plan includes the following features:
| **AI features** | **Compatibility** | **Deployment** | **Admin/Security** | **Support** |
| ----------------------------------------------------------------------------- | --------------------------------------------------- | ------------------ | ------------------------------------------ | ---------------------- |
-| Reasonable use autocomplete limits | VS Code, JetBrains IDEs, Visual Studio, and Eclipse | Multi-tenant Cloud | SSO/SAML with basic GitHub, GitLab, Google | Community support only |
+| Reasonable use autocomplete limits | VS Code, JetBrains IDEs, and Visual Studio | Multi-tenant Cloud | SSO/SAML with basic GitHub, GitLab, Google | Community support only |
| Reasonable use chat messages and prompts per month | All popular coding languages | - | - | - |
-| Multiple LLM selection (Claude 3.5 Sonnet, Gemini 1.5 Pro and Flash, Mixtral) | Natural language search | - | - | - |
-| Support for local Ollama models | All major codehosts (GitHub, GitLab, Bitbucket) | - | - | - |
+| Multiple LLM selection (Claude 3.5 Sonnet, Gemini 1.5 Pro and Flash) | Natural language search | - | - | - |
## Pricing and billing cycle
@@ -7671,10 +8576,10 @@ The Enterprise Starter plan provides extended usage limits and advanced features
| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Description** | - AI editor assistant for hobbyists or light usage | - AI and search for growing organizations hosted on our cloud |
| **Price** | - $0/month - 1 user | - $19/user/month - Up to 50 devs |
-| **AI features** | - Autocompletions - 200 chat messages and prompts per month - Multiple LLM choices for chat - Connect to local Ollama models | - Code autocomplete and chat - More powerful LLMs for chat (GPT-4o, Claude 3 Opus) - Intent detection and integrated search results |
+| **AI features** | - Autocompletions - 200 chat messages and prompts per month - Multiple LLM choices for chat | - Code autocomplete and chat - More powerful LLMs for chat (GPT-4o, Claude 3 Opus) - Integrated search results |
| **Code Search features** | N/A | - Code Search - Symbol Search |
| **Deployment types**     | - Multi-tenant Cloud                                                                                                                                      | - Multi-tenant Cloud - Private Workspace - Privately indexed code (100 repos)                                                                             |
-| **Compatibility** | - VS Code, JetBrains IDEs, Visual Studio, and Eclipse - All popular coding languages Natural language search - All major code hosts | - VS Code, JetBrains IDEs, Visual Studio, and Eclipse - All popular coding languages Natural language search - Code hosted on GitHub |
+| **Compatibility** | - VS Code, JetBrains IDEs, and Visual Studio - All popular coding languages Natural language search - All major code hosts | - VS Code, JetBrains IDEs, and Visual Studio - All popular coding languages Natural language search - Code hosted on GitHub |
| **Support** | - Community support only | - 9x5 Support |
## Moving to Enterprise Starter plan
@@ -7685,7 +8590,7 @@ Click the **Create workspace** button to navigate to the payment page. Here, you
-
+
# Enterprise
Learn about Sourcegraph's Enterprise plan and the features included.
@@ -7708,12 +8613,12 @@ Here's a detailed breakdown of features included in the different Enterprise pla
-
+
# Enterprise Starter
Learn about the Enterprise Starter plan tailored for individuals and teams wanting private code indexing and search to leverage the Sourcegraph platform better.
-The Enterprise Starter plan offers a multi-tenant Sourcegraph instance designed for individuals and teams. It provides the core features of a traditional Sourcegraph instance but with a simplified management experience. This plan provides a fully managed version of Sourcegraph (AI + code search + intent detection with integrated search results, with privately indexed code) through a self-serve flow.
+The Enterprise Starter plan offers a multi-tenant Sourcegraph instance designed for individuals and teams. It provides the core features of a traditional Sourcegraph instance but with a simplified management experience. This plan provides a fully managed version of Sourcegraph (AI + code search with integrated search results, with privately indexed code) through a self-serve flow.
## Team seats
@@ -7732,18 +8637,18 @@ Workspaces on the Enterprise Starter plan are billed monthly based on the number
If you fail to make the payment after the grace period, your workspace will be deleted, and you will not be able to recover your data.
-Please also see [Billing FAQs](billing-faqs.mdx).
+Please also see [Billing FAQs](billing-faqs.mdx) for more FAQs, including how to downgrade Enterprise Starter.
## Features supported
The Enterprise Starter plan supports a variety of AI and search-based features like:
-| **AI features** | **Code Search** | **Management** | **Support** |
-| ------------------------------------------------ | ------------------------------ | --------------------------------------------------------- | ------------------------- |
-| Code autocompletions and chat messages | Indexed Code Search | Simplified admin experience with UI-based repo-management | Support with limited SLAs |
-| Powerful LLM models for chat | Indexed Symbol Search | User management | - |
-| Intent detection with integrated search results | Searched based code-navigation | GitHub code host integration | - |
-| Cody integration | - | - | - |
+| **AI features** | **Code Search** | **Management** | **Support** |
+| -------------------------------------- | ------------------------------ | --------------------------------------------------------- | ------------------------- |
+| Code autocompletions and chat messages | Indexed Code Search | Simplified admin experience with UI-based repo-management | Support with limited SLAs |
+| Powerful LLM models for chat | Indexed Symbol Search | User management | - |
+| Integrated search results              | Search-based code navigation   | GitHub code host integration                              | -                         |
+| Cody integration | - | - | - |
## Limits
@@ -7799,40 +8704,6 @@ As you add more repos, you get logs for the number of repos added, storage used,

-## Downgrading Enterprise Starter
-
-To downgrade your Enterprise Starter, there are two options:
-
-- Delete the workspace — this cancels the subscription, and you lose access to the workspace
-- Cancel the workspace subscription from the **Billing** settings. This allows you to access the workspace until the end of the current billing period
-
-
-
-
-# Billing FAQs for Enterprise Starter
-
-
Learn about the billing for the Enterprise Starter plan.
-
-## How do I cancel subscription renewal?
-
-In the **Workspace settings > Billing** page, you are able to cancel the subscription and continue having access to your workspace until the end of your current billing period (that is indicated in the UI).
-
-## How do I cancel subscription and delete my workspace immediately?
-
-In the **Workspace settings > General settings** page, you are able to delete your workspace and you will lose access immediately. This action will also cancel your subscription immediately.
-
-## How is the subscription renewal dates determined?
-
-Your subscription renewals are scheduled to happen at the same day of the month. On shorter months (e.g. day 31 on April that only has 30 days), the last day of the month will be the subscription renewal day instead.
-
-## How do I pay my invoice if my subscription is past due?
-
-After updating or resolving your payment method issue that occurred during the automatic subscription renewal, you may do one of the followings to pay the invoice for your past-due subscription:
-
-1. Wait for our system to re-attempt the charge for the invoice, it usually takes up to 24 hours. If it does not happen after 24 hours, please contact [Support](support@sourcegraph.com) to resolve the issue.
-1. In the **Workspace settings > Billing** page, click on the **Manage payments** button, which takes you to the Stripe Customer Portal, and pay the invoice there. Our system will reconcile your payment within 24 hours, if it does not reconcile after 24 hours, please contact [Support](support@sourcegraph.com) to resolve the issue.
-1. Contact [Support](support@sourcegraph.com) to request re-attempt the charge for the invoice using the payment method on file.
-
@@ -10112,7 +10983,7 @@ In addition to searching your organization’s private code, you can use Sourceg
### Search syntax
-Sourcegraph offers [structural search](/code-search/types/structural), and GitHub code search does not offer this search method. Structural search lets you match richer syntax patterns, specifically in code and structured data formats like JSON. Sourcegraph offers structural search on indexed code and uses [Comby syntax](https://comby.dev/docs/syntax-reference) for structural matching of code blocks or nested expressions. For example, the `fmt.Sprintf` function is a popular print function in Go. [Here](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+fmt.Sprintf%28...%29&patternType=structural&_ga=2.204781593.827352295.1667227568-1057140468.1661198534&_gac=1.118615675.1665776224.CjwKCAjwkaSaBhA4EiwALBgQaJCOc6GlhIDQyg6HQScgfSBQpoFTUf7T_NNqEX5JaobtCS08GUEJuRoCIlIQAvD_BwE&_gl=1*1r2u5zs*_ga*MTA1NzE0MDQ2OC4xNjYxMTk4NTM0*_ga_E82CCDYYS1*MTY2NzUwODExNC4xMTQuMS4xNjY3NTA5NjUyLjAuMC4w) is a pattern that matches all of the arguments in `fmt.Sprintf` in our code using structural search compared to the [search](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+fmt.Sprintf%28...%29&patternType=regexp) using regex.
+Sourcegraph offers [structural search](/code-search/types/structural), and GitHub code search does not offer this search method. Structural search lets you match richer syntax patterns, specifically in code and structured data formats like JSON. Sourcegraph offers structural search on indexed code and uses [Comby syntax](https://comby.dev/docs/syntax-reference) for structural matching of code blocks or nested expressions. For example, the `fmt.Sprintf` function is a popular print function in Go. [Here](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/about%24+fmt.Sprintf%28...%29&patternType=structural&_ga=2.204781593.827352295.1667227568-1057140468.1661198534&_gac=1.118615675.1665776224.CjwKCAjwkaSaBhA4EiwALBgQaJCOc6GlhIDQyg6HQScgfSBQpoFTUf7T_NNqEX5JaobtCS08GUEJuRoCIlIQAvD_BwE&_gl=1*1r2u5zs*_ga*MTA1NzE0MDQ2OC4xNjYxMTk4NTM0*_ga_E82CCDYYS1*MTY2NzUwODExNC4xMTQuMS4xNjY3NTA5NjUyLjAuMC4w) is a pattern that matches all of the arguments in `fmt.Sprintf` in our code using structural search compared to the [search](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/about%24+fmt.Sprintf%28...%29&patternType=regexp) using regex.
Both GitHub code search and Sourcegraph support regular expression and keyword search. [Regular expression](/code-search/queries#standard-search) helps you find code that matches a pattern (including classes of characters like letters, numbers, and whitespace) and can restrict the results to anchors like the start of a line, the end of a line, or word boundary. Keyword search matches on individual terms, supporting both literal searches and flexible keyword-style queries.
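As a concrete illustration, the two queries compared above look roughly like this in Sourcegraph's search box (the `patternType:` filter selects the matching mode; the exact query text is a sketch, not copied from the linked searches):

```
# Structural search: the Comby hole (...) matches the whole argument list
repo:^github\.com/sourcegraph/about$ fmt.Sprintf(...) patternType:structural

# Regexp search: (...) is interpreted as a regex group instead
repo:^github\.com/sourcegraph/about$ fmt.Sprintf(...) patternType:regexp
```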
@@ -10526,24 +11397,12 @@ Once your Pro subscription is confirmed, click **My subscription** to manage and
## Pro
-Cody Pro, designed for individuals or small teams at **$9 per user per month**, offers an enhanced coding experience beyond the free plan. It provides unlimited autocompletion suggestions plus unlimited chat and prompts. It also uses local repository context to enhance Cody's understanding and accuracy.
-
-Cody Pro uses DeepSeek-Coder-V2 by default for autocomplete. Pro accounts also default to the Claude 3.5 Sonnet (New) model for chat and prompts, but users can switch to other LLM model choices with unlimited usage, including:
+Cody Pro, designed for individuals or small teams at **$9 per user per month**, offers an enhanced coding experience beyond the free plan. It provides unlimited autocompletion suggestions plus increased limits for chat and prompts. It also uses local repository context to enhance Cody's understanding and accuracy.
-- Claude Instant 1.2
-- Claude 2
-- Claude 3
-- ChatGPT 3.5 Turbo
-- GPT-4o
-- ChatGPT 4 Turbo Preview
-- Mixtral
-- Google Gemini 1.5 Pro
-- Google Gemini 1.5 Flash
+Cody Pro uses DeepSeek-Coder-V2 by default for autocomplete. Pro accounts also default to the Claude 3.5 Sonnet (New) model for chat and prompts, but users can switch to other LLM model choices. You can refer to the [supported LLM models](/cody/capabilities/supported-models) docs for more information.
Support for Cody Pro is available through our Support team via support@sourcegraph.com, ensuring prompt assistance and guidance.
-There will be high daily limits to catch bad actors and prevent abuse, but under most normal usage, Pro users won't experience these limits.
-
### Downgrading from Pro to Free
To revert back to Cody Free from Pro:
@@ -10589,9 +11448,15 @@ We don't offer refunds, but if you have any queries regarding the Cody Pro plan,
You can access your invoices via the [Cody Dashboard](https://sourcegraph.com/cody/manage) and clicking "Manage Subscription".
+## Enterprise Starter
+
+Cody Pro users can also switch to the Enterprise Starter plan for **$19 per user per month**. This plan includes all the features of Cody Pro plus a multi-tenant Sourcegraph instance with core features like a fully managed version of Sourcegraph (AI + code search with integrated search results, with privately indexed code) through a self-serve flow.
+
+Read more about the [Enterprise Starter plan](/pricing/enterprise-starter).
+
## Enterprise
-Cody Enterprise is designed for enterprises prioritizing security and administrative controls. We offer either seat-based or token based pricing models, depending on what makes most sense for your organization. You get Claude 3 (Opus and Sonnet 3.5) as the default LLM models without extra cost. You also get additional capabilities like BYOLLM (Bring Your Own LLM), supporting Single-Tenant and Self Hosted setups for flexible coding environments.
+Cody Enterprise is designed for enterprises prioritizing security and administrative controls. We offer either seat-based or token based pricing models, depending on what makes most sense for your organization. You get Claude Haiku 3.5 and Claude Sonnet 3.5 as the default LLM models without extra cost. You also get additional capabilities like BYOLLM (Bring Your Own LLM), supporting Single-Tenant and Self Hosted setups for flexible coding environments.
Security features include SAML/SSO for enhanced authentication and guardrails to enforce coding standards. Cody Enterprise supports advanced Code Graph context and multi-code host context for a deeper understanding of codebases, especially in complex projects. With 24/5 enhanced support, Cody Enterprise ensures timely assistance.
@@ -10599,21 +11464,21 @@ Security features include SAML/SSO for enhanced authentication and guardrails to
The following table shows a high-level comparison of the three plans available on Cody.
-| **Features** | **Free** | **Pro** | **Enterprise** |
-| --------------------------------- | ---------------------------------------------------------- | --------------------------------------------------------------------------------------------- | -------------------------------------------------- |
-| **Autocompletion suggestions** | Unlimited | Unlimited | Unlimited |
-| **Chat Executions** | 200 per month | Unlimited | Unlimited |
-| **Keyword Context (local code)** | Supported | Supported | Supported |
-| **Developer Limitations** | 1 developer | 1 developer | Scalable, consumption-based pricing |
-| **LLM Support** | [View latest](/cody/capabilities/supported-models) | [View latest](/cody/capabilities/supported-models) | [View latest](/cody/capabilities/supported-models) |
-| **Code Editor Support** | VS Code, JetBrains IDEs, Visual Studio (Preview) | VS Code, JetBrains IDEs, Visual Studio (Preview) | VS Code, JetBrains IDEs, Visual Studio (Preview) |
-| **Single-Tenant and Self Hosted** | N/A | N/A | Yes |
-| **SAML/SSO** | N/A | N/A | Yes |
-| **Guardrails** | N/A | N/A | Yes |
-| **Advanced Code Graph Context** | N/A | N/A | Included |
-| **Multi-Code Host Context** | N/A | N/A | Included |
-| **Discord Support** | Yes | Yes | Yes |
-| **24/5 Enhanced Support** | N/A | N/A | Yes |
+| **Features** | **Free** | **Pro** | **Enterprise Starter** | **Enterprise** |
+| --------------------------------- | ---------------------------------------------------------- | ----------------------------------------------------------------- | -------------------------------------------------- | -------------------------------------------------- |
+| **Autocompletion suggestions** | Unlimited | Unlimited | Unlimited | Unlimited |
+| **Chat Executions** | 200 per month | Increased limits | Increased limits | Unlimited |
+| **Keyword Context (local code)** | Supported | Supported | Supported | Supported |
+| **Developer Limitations** | 1 developer | 1 developer | Scalable, per-seat pricing | Scalable, consumption-based pricing |
+| **LLM Support** | [View latest](/cody/capabilities/supported-models) | [View latest](/cody/capabilities/supported-models) | [View latest](/cody/capabilities/supported-models) | [View latest](/cody/capabilities/supported-models) |
+| **Code Editor Support** | VS Code, JetBrains IDEs, Visual Studio (Preview) | VS Code, JetBrains IDEs, Visual Studio (Preview) | VS Code, JetBrains IDEs, Visual Studio (Preview) | VS Code, JetBrains IDEs, Visual Studio (Preview) |
+| **Single-Tenant and Self Hosted** | N/A | N/A | N/A | Yes |
+| **SAML/SSO** | N/A | N/A | N/A | Yes |
+| **Guardrails** | N/A | N/A | N/A | Yes |
+| **Advanced Code Graph Context** | N/A | N/A | N/A | Included |
+| **Multi-Code Host Context** | N/A | N/A | GitHub only | Included |
+| **Discord Support** | Yes | Yes | Yes | Yes |
+| **24/5 Enhanced Support** | N/A | N/A | Yes | Yes |
@@ -10633,6 +11498,19 @@ If you're experiencing issues with Cody not responding in chat, follow these ste
- Ensure you have the latest version of the [Cody VS Code extension](https://marketplace.visualstudio.com/items?itemName=sourcegraph.cody-ai). Use the VS Code command `Extensions: Check for Extension Updates` to verify
- Check the VS Code error console for relevant error messages. To open it, run the VS Code command `Developer: Toggle Developer Tools` and then look in the `Console` for relevant messages
+### Cody responses/completions are slow
+
+If you're experiencing issues with Cody's responses or completions being too slow:
+
+- Ensure you have the latest version of the [Cody VS Code extension](https://marketplace.visualstudio.com/items?itemName=sourcegraph.cody-ai). Use the VS Code command `Extensions: Check for Extension Updates` to verify
+- Enable verbose logging, restart the extension, and reproduce the issue (see `Access Cody logs` below for how to do this)
+- Send the information to our Support Team at support@sourcegraph.com
+
+Some additional information that will be valuable:
+- Where are you located? Any proxies or firewalls in use?
+- Does this happen with multiple providers/models? Which models have you used?
+
+
### Access Cody logs
VS Code logs can be accessed via the **Outputs** view. You will need to set Cody to verbose mode to ensure important information to debug is on the logs. To do so:
@@ -10691,7 +11569,7 @@ To troubleshoot further:
Cody Free provides **unlimited autocomplete suggestions** and **200 chat invocations** per user per month.
-On Cody Pro and Enterprise plans, usage is unlimited but controlled by **Fair Usage**. This means that some users occasionally experience a limitation placed on their account. This limitation resets within 24 hours. If this issue persists, contact us through our [community forum](https://community.sourcegraph.com), Discord, or email support@sourcegraph.com.
+On Cody Pro and Enterprise plans, usage limits are increased and controlled by **Fair Usage**. This means that some users occasionally experience a limitation placed on their account. This limitation resets within 24 hours. If this issue persists, contact us through our [community forum](https://community.sourcegraph.com), Discord, or email support@sourcegraph.com.
#### 429 errors
@@ -10725,6 +11603,20 @@ It indicates that our web application firewall provider, Cloudflare, has flagged
Consider disabling anonymizers, VPNs, or open proxies. If using a VPN is essential, you can try [1.1.1.1](https://one.one.one.one), which is recognized to be compatible with our services.
+### Error with Cody `contextFilters` during chat or editing code
+
+The `contextFilters` setting in Cody controls which files are included or excluded when Cody searches for relevant context while answering questions or providing code assistance. Sometimes you may see the following error:
+
+```
+Edit failed to run: file is ignored (due to cody.contextFilters Enterprise configuration setting)
+```
+
+This error occurs when you're trying to work with a file that's been excluded by Cody's enterprise-level `contextFilters` configuration. Occasionally, the error can also appear for files that haven't been excluded. First, check with your organization's admin to understand which files are excluded.
+
+If the error occurs with a file that hasn't been excluded, the workaround is to uninstall the Cody plugin, restart your IDE, and then reinstall the latest version of the extension.
+
+This should clear the error.
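
For reference, context filters live in the Sourcegraph site configuration. A minimal sketch, assuming the `cody.contextFilters` schema with `exclude` rules keyed by `repoNamePattern` (the repository pattern below is hypothetical):

```json
{
  "cody.contextFilters": {
    "exclude": [
      { "repoNamePattern": "^github\\.com/example-org/secret-repo$" }
    ]
  }
}
```

Files in repositories matching an `exclude` rule are never sent to the LLM as context, which is what triggers the error above.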
+
### VS Code Pro License Issues
If VS Code prompts you to upgrade to Pro despite already having a Pro license, this usually happens because you're logged into a free Cody/Sourcegraph account rather than your Pro account. To fix this:
@@ -10848,7 +11740,7 @@ $filteredResults = preg_grep('*\.' . basename($inputPath) . '\.*', $fileList);
If you would like to add a forked repository as Cody context, you may need to add `"search.includeForks": true` to the [global settings](/admin/config/settings#editing-global-settings-for-site-admins) for your instance.
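
In the global settings, that looks like:

```json
{
  "search.includeForks": true
}
```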
-## Eclipse extension
+{/* ## Eclipse extension
### See a white screen the first time you open Cody chat
@@ -10870,7 +11762,7 @@ You can open the Cody Log view using the same steps as above, but instead, selec
This will include more information about what Cody is doing, including any errors. There is a copy button at the top right of the log view that you can use to copy the log to your clipboard and send it to us. Be careful not to include any sensitive information, as the log communication is verbose and may contain tokens.
-Additionally, Eclipse's built-in Error Log can be used to view any uncaught exceptions and their stack traces. You can open the Error Log using the **Window > Show View > Error Log** menu.
+Additionally, Eclipse's built-in Error Log can be used to view any uncaught exceptions and their stack traces. You can open the Error Log using the **Window > Show View > Error Log** menu. */}
## OpenAI o1
@@ -10879,7 +11771,6 @@ Additionally, Eclipse's built-in Error Log can be used to view any uncaught exce
Symptoms:
- "Request Failed: Request to... failed with 500 Internal Server Error: context deadline exceeded"
-- Occurs with both o1-mini and o1-preview
- Happens even with relatively small inputs (~220 lines)
Solutions:
@@ -10905,8 +11796,6 @@ Symptoms:
Solutions:
-- For `o1-preview`: Copy the last line and ask to "continue from last line"
-- Switch to `o1-mini` for more reliable complete outputs
- Break down complex requests into smaller steps
- Consider using Sonnet 3.5 for tasks requiring longer outputs
@@ -10977,7 +11866,7 @@ To help you automate your key tasks in your development workflow, you get **[Pro
The Cody chat interface offers a few more options and settings. [You can read more in these docs](/cody/capabilities/chat).
-
+
## Chat with Cody
@@ -11125,9 +12014,15 @@ Cody leverages the `@-mention` syntax to source context via files, symbols, web
You can learn more about context [here](/cody/core-concepts/context).
+### Indexing your repositories for context
+@-mentioning local and current repositories is only available if your repository is indexed. Enterprise and Enterprise Starter users can ask their admins to add their local project for indexing to get access to @-mention context.
+
+Repository indexing is only available for supported [Code Hosts](https://sourcegraph.com/docs/admin/code_hosts). Please reach out to your admins if you need assistance with indexing.
+
+
## Selecting the right LLM
-Cody offers a variety of LLMs for both chat and in-line edits by all the leading LLM providers. Each LLM has its strengths and weaknesses, so it is important to select the right one for your use case. For example, Claude 3.5 Sonnet and GPT-4o are powerful for code generation and provide accurate results. However, Gemini 1.5 Flash is a decent choice for cost-effective searches. So, you can always optimize your choice of LLM based on your use case.
+Cody offers a variety of LLMs for both chat and in-line edits by all the leading LLM providers. Each LLM has its strengths and weaknesses, so it is important to select the right one for your use case. For example, Claude 3.5 Sonnet and GPT-4o are powerful for code generation and provide accurate results. However, Gemini 2.0 Flash is a decent choice for cost-effective searches. So, you can always optimize your choice of LLM based on your use case.
Learn more about all the supported LLMs [here](/cody/capabilities/supported-models).
@@ -11341,7 +12236,7 @@ You can share all these prompt examples with your team members to help them get
Supported on all [Sourcegraph plans](https://about.sourcegraph.com/pricing).
- Available on VS Code, JetBrains, Visual Studio, Eclipse and the Web.
+ Available on VS Code, JetBrains, Visual Studio, and the Web.
@@ -11351,7 +12246,7 @@ Cody is an AI coding assistant that uses all the latest LLMs and your developmen
-Cody connects seamlessly with codehosts like [GitHub](https://github.com/login?client_id=e917b2b7fa9040e1edd4), [GitLab](https://gitlab.com/users/sign_in) and IDEs like [VS Code](/cody/clients/install-vscode), [JetBrains](/cody/clients/install-jetbrains), [Visual Studio](/cody/clients/install-visual-studio), and [Eclipse](/cody/clients/install-eclipse). Once connected, Cody acts as your personal AI coding assistant, equipped with the following capabilities:
+Cody connects seamlessly with codehosts like [GitHub](https://github.com/login?client_id=e917b2b7fa9040e1edd4), [GitLab](https://gitlab.com/users/sign_in) and IDEs like [VS Code](/cody/clients/install-vscode), [JetBrains](/cody/clients/install-jetbrains), and [Visual Studio](/cody/clients/install-visual-studio). Once connected, Cody acts as your personal AI coding assistant, equipped with the following capabilities:
1. Developer chat with the most powerful models and context
2. Code completions, code edits, and customizable prompts
@@ -11365,7 +12260,6 @@ You can start using Cody with one of the following options:
-
@@ -11508,6 +12402,14 @@ Cody does not support embeddings on Cody PLG and Cody Enterprise because we have
Leveraging Sourcegraph Search allowed us to deliver these enhancements.
+## LLM Data Sharing and Retention
+
+### Is any of my data sent to DeepSeek?
+
+Our autocomplete feature uses the open source DeepSeek-Coder-V2 model, which is hosted by Fireworks.ai in a secure single-tenant environment located in the USA. No customer chat or autocomplete data (such as chat messages, code snippets, or configuration) is stored by Fireworks.ai.
+
+Sourcegraph does not use models hosted by DeepSeek (the company) and does not send any data to it.
+
## Third party dependencies
### What is the default `sourcegraph` provider for completions?
@@ -11573,8 +12475,6 @@ Once you are signed in make sure to re-enable protection.
#### Model Selection
-- **o1-preview**: Best for complex reasoning and planning
-- **o1-mini**: More reliable for straightforward tasks
- **Sonnet 3.5**: Better for tasks requiring longer outputs
#### Prompting Strategy
@@ -11587,7 +12487,6 @@ Once you are signed in make sure to re-enable protection.
#### Response Time
- Start with smaller contexts
-- Use o1-mini for faster responses
- Consider breaking complex tasks into stages
#### Quality
@@ -14094,6 +14993,41 @@ In the example above:
- Sourcegraph-provided models are used for `"chat"` and `"fastChat"` (accessed via Cody Gateway)
- The newly configured model, `"huggingface-codellama::v1::CodeLlama-7b-hf"`, is used for `"autocomplete"` (connecting directly to Hugging Face’s OpenAI-compatible API)
+#### Example configuration with Claude 3.7 Sonnet
+
+```json
+{
+ "modelRef": "anthropic::2024-10-22::claude-3-7-sonnet-latest",
+ "displayName": "Claude 3.7 Sonnet",
+ "modelName": "claude-3-7-sonnet-latest",
+ "capabilities": [
+ "chat",
+ "reasoning"
+ ],
+ "category": "accuracy",
+ "status": "stable",
+ "tier": "pro",
+ "contextWindow": {
+ "maxInputTokens": 45000,
+ "maxOutputTokens": 4000
+ },
+ "modelCost": {
+ "unit": "mtok",
+ "inputTokenPennies": 300,
+ "outputTokenPennies": 1500
+ },
+ "reasoningEffort": "high"
+},
+```
+
+In this `modelOverrides` config example:
+
+- The model is configured to use Claude 3.7 Sonnet with Cody Gateway
+- The model is configured with the `"chat"` and `"reasoning"` capabilities
+- The `reasoningEffort` field accepts three values: `high`, `medium`, and `low`
+- The default `reasoningEffort` is `low`
+- When the reasoning effort is `low`, 1024 tokens are used as the thinking budget; with `medium` and `high`, the thinking budget is set to `max_tokens_to_sample/2`
+
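The thinking-budget rules described above can be sketched as a small helper (hypothetical; not part of any Sourcegraph tooling):

```shell
# Maps a reasoningEffort value and max_tokens_to_sample to the thinking
# budget, per the rules above: "low" gets a fixed 1024 tokens, while
# "medium" and "high" get max_tokens_to_sample / 2.
thinking_budget() {
  local effort="$1" max_tokens_to_sample="$2"
  if [ "$effort" = "low" ]; then
    echo 1024
  else
    echo $(( max_tokens_to_sample / 2 ))
  fi
}

thinking_budget low 4000    # 1024
thinking_budget high 4000   # 2000
```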
Refer to the [examples page](/cody/enterprise/model-config-examples) for additional examples.
## View configuration
@@ -14153,18 +15087,6 @@ The response includes:
"maxOutputTokens": 4000
}
},
- {
- "modelRef": "anthropic::2023-06-01::claude-3-opus",
- "displayName": "Claude 3 Opus",
- "modelName": "claude-3-opus-20240229",
- "capabilities": ["chat"],
- "category": "other",
- "status": "stable",
- "contextWindow": {
- "maxInputTokens": 45000,
- "maxOutputTokens": 4000
- }
- },
{
"modelRef": "anthropic::2023-06-01::claude-3-haiku",
"displayName": "Claude 3 Haiku",
@@ -14213,30 +15135,6 @@ The response includes:
"maxOutputTokens": 4000
}
},
- {
- "modelRef": "google::v1::gemini-1.5-flash",
- "displayName": "Gemini 1.5 Flash",
- "modelName": "gemini-1.5-flash",
- "capabilities": ["chat"],
- "category": "speed",
- "status": "stable",
- "contextWindow": {
- "maxInputTokens": 45000,
- "maxOutputTokens": 4000
- }
- },
- {
- "modelRef": "mistral::v1::mixtral-8x7b-instruct",
- "displayName": "Mixtral 8x7B",
- "modelName": "accounts/fireworks/models/mixtral-8x7b-instruct",
- "capabilities": ["chat"],
- "category": "speed",
- "status": "stable",
- "contextWindow": {
- "maxInputTokens": 7000,
- "maxOutputTokens": 4000
- }
- },
{
"modelRef": "openai::2024-02-01::gpt-4o",
"displayName": "GPT-4o",
@@ -14261,18 +15159,6 @@ The response includes:
"maxOutputTokens": 4000
}
},
- {
- "modelRef": "openai::2024-02-01::cody-chat-preview-002",
- "displayName": "OpenAI o1-mini",
- "modelName": "cody-chat-preview-002",
- "capabilities": ["chat"],
- "category": "accuracy",
- "status": "waitlist",
- "contextWindow": {
- "maxInputTokens": 45000,
- "maxOutputTokens": 4000
- }
- }
],
"defaultModels": {
"chat": "anthropic::2024-10-22::claude-3-5-sonnet-latest",
@@ -14430,30 +15316,6 @@ Below are configuration examples for setting up various LLM providers using BYOK
"maxOutputTokens": 4000
}
},
- {
- "modelRef": "anthropic::2023-06-01::claude-3-haiku",
- "displayName": "Claude 3 Haiku",
- "modelName": "claude-3-haiku-20240307",
- "capabilities": ["chat"],
- "category": "speed",
- "status": "stable",
- "contextWindow": {
- "maxInputTokens": 7000,
- "maxOutputTokens": 4000
- }
- },
- {
- "modelRef": "anthropic::2023-06-01::claude-3-haiku",
- "displayName": "Claude 3 Haiku",
- "modelName": "claude-3-haiku-20240307",
- "capabilities": ["edit", "chat"],
- "category": "speed",
- "status": "stable",
- "contextWindow": {
- "maxInputTokens": 7000,
- "maxOutputTokens": 4000
- }
- }
],
"defaultModels": {
"chat": "anthropic::2024-10-22::claude-3.5-sonnet",
@@ -15110,7 +15972,7 @@ For the supported LLM model configuration listed above, refer to the following n
## Using Sourcegraph Cody Gateway
-This is the recommended way to configure Cody Enterprise. It supports all the latest models from Anthropic, OpenAI, Mistral, and more without requiring a separate account or incurring separate charges. You can learn more about these in our [supported models](/cody/capabilities/supported-models) docs.
+This is the recommended way to configure Cody Enterprise. It supports all the latest models from Anthropic, OpenAI, and more without requiring a separate account or incurring separate charges. You can learn more about these in our [supported models](/cody/capabilities/supported-models) docs.
## Using your organization's account with a model provider
@@ -15304,7 +16166,7 @@ Users of the Cody extensions will automatically pick up this change when connect
Learn about Cody's token limits and how to manage them.
-For all models, Cody allows up to **4,000 tokens of output**, which is approximately **500-600** lines of code. For Claude 3 Sonnet or Opus models, Cody tracks two separate token limits:
+For all models, Cody allows up to **4,000 tokens of output**, which is approximately **500-600** lines of code. For Claude 3 Sonnet models, Cody tracks two separate token limits:
- The @-mention context is limited to **30,000 tokens** (~4,000 lines of code) and can be specified using the @-filename syntax. The user explicitly defines this context, which provides specific information to Cody.
- Conversation context is limited to **15,000 tokens**, including user questions, system responses, and automatically retrieved context items. Apart from user questions, Cody generates this context automatically.
@@ -15315,61 +16177,56 @@ Here's a detailed breakdown of the token limits by model:
-| **Model** | **Conversation Context** | **@-mention Context** | **Output** |
-| --------------------------- | ------------------------ | --------------------- | ---------- |
-| gpt-3.5-turbo | 7,000 | shared | 4,000 |
-| gpt-4-turbo | 7,000 | shared | 4,000 |
-| gpt 4o | 7,000 | shared | 4,000 |
-| claude-2.0 | 7,000 | shared | 4,000 |
-| claude-2.1 | 7,000 | shared | 4,000 |
-| claude-3 Haiku | 7,000 | shared | 4,000 |
-| claude-3.5 Haiku | 7,000 | shared | 4,000 |
-| **claude-3 Sonnet** | **15,000** | **30,000** | **4,000** |
-| **claude-3.5 Sonnet** | **15,000** | **30,000** | **4,000** |
-| **claude-3.5 Sonnet (New)** | **15,000** | **30,000** | **4,000** |
-| mixtral 8x7B | 7,000 | shared | 4,000 |
-| mixtral 8x22B | 7,000 | shared | 4,000 |
-| Google Gemini 1.5 Flash | 7,000 | shared | 4,000 |
-| Google Gemini 1.5 Pro | 7,000 | shared | 4,000 |
+| **Model** | **Conversation Context** | **@-mention Context** | **Output** |
+| ----------------------------- | ------------------------ | --------------------- | ---------- |
+| GPT 4o mini | 7,000 | shared | 4,000 |
+| GPT o3 mini medium | 7,000 | shared | 4,000 |
+| Claude 3.5 Haiku | 7,000 | shared | 4,000 |
+| **Claude 3.5 Sonnet (New)** | **15,000** | **30,000** | **4,000** |
+| Gemini 1.5 Pro | 7,000 | shared | 4,000 |
+| Gemini 2.0 Flash | 7,000 | shared | 4,000 |
+| Gemini 2.0 Flash-Lite Preview | 7,000 | shared | 4,000 |
-| **Model** | **Conversation Context** | **@-mention Context** | **Output** |
-| --------------------------- | ------------------------ | --------------------- | ---------- |
-| gpt-3.5-turbo | 7,000 | shared | 4,000 |
-| gpt-4 | 7,000 | shared | 4,000 |
-| gpt-4-turbo | 7,000 | shared | 4,000 |
-| claude instant | 7,000 | shared | 4,000 |
-| claude-2.0 | 7,000 | shared | 4,000 |
-| claude-2.1 | 7,000 | shared | 4,000 |
-| claude-3 Haiku | 7,000 | shared | 4,000 |
-| claude-3.5 Haiku | 7,000 | shared | 4,000 |
-| **claude-3 Sonnet** | **15,000** | **30,000** | **4,000** |
-| **claude-3.5 Sonnet** | **15,000** | **30,000** | **4,000** |
-| **claude-3.5 Sonnet (New)** | **15,000** | **30,000** | **4,000** |
-| **claude-3 Opus** | **15,000** | **30,000** | **4,000** |
-| **Google Gemini 1.5 Flash** | **15,000** | **30,000** | **4,000** |
-| **Google Gemini 1.5 Pro** | **15,000** | **30,000** | **4,000** |
-| mixtral 8x7b | 7,000 | shared | 4,000 |
+
+The Pro tier supports the token limits for the LLM models on the Free tier, plus:
+
+| **Model** | **Conversation Context** | **@-mention Context** | **Output** |
+| ----------------------------- | ------------------------ | --------------------- | ---------- |
+| GPT 4o mini | 7,000 | shared | 4,000 |
+| GPT o3 mini medium | 7,000 | shared | 4,000 |
+| GPT 4 Turbo | 7,000 | shared | 4,000 |
+| GPT 4o | 7,000 | shared | 4,000 |
+| o1 | 7,000 | shared | 4,000 |
+| Claude 3.5 Haiku | 7,000 | shared | 4,000 |
+| **Claude 3.5 Sonnet (New)** | **15,000** | **30,000** | **4,000** |
+| Claude 3.7 Sonnet | 15,000 | 30,000 | 4,000 |
+| Gemini 1.5 Pro | 15,000 | 30,000 | 4,000 |
+| Gemini 2.0 Flash | 7,000 | shared | 4,000 |
+| Gemini 2.0 Flash-Lite Preview | 7,000 | shared | 4,000 |
+
-| **Model** | **Conversation Context** | **@-mention Context** | **Output** |
-| --------------------------- | ------------------------ | --------------------- | ---------- |
-| gpt-3.5-turbo | 7,000 | shared | 1,000 |
-| gpt-4 | 7,000 | shared | 1,000 |
-| gpt-4-turbo | 7,000 | shared | 1,000 |
-| claude instant | 7,000 | shared | 1,000 |
-| claude-2.0 | 7,000 | shared | 1,000 |
-| claude-2.1 | 7,000 | shared | 1,000 |
-| claude-3 Haiku | 7,000 | shared | 1,000 |
-| claude-3.5 Haiku | 7,000 | shared | 1,000 |
-| **claude-3 Sonnet** | **15,000** | **30,000** | **4,000** |
-| **claude-3.5 Sonnet** | **15,000** | **30,000** | **4,000** |
-| **claude-3.5 Sonnet (New)** | **15,000** | **30,000** | **4,000** |
-| **claude-3 Opus** | **15,000** | **30,000** | **4,000** |
-| mixtral 8x7b | 7,000 | shared | 1,000 |
+
+The Enterprise tier supports the token limits for the LLM models on the Free and Pro tiers, plus:
+
+| **Model** | **Conversation Context** | **@-mention Context** | **Output** |
+| ----------------------------- | ------------------------ | --------------------- | ---------- |
+| GPT 4o mini | 7,000 | shared | 4,000 |
+| GPT o3 mini medium | 7,000 | shared | 4,000 |
+| GPT 4 Turbo | 7,000 | shared | 4,000 |
+| GPT 4o | 7,000 | shared | 4,000 |
+| o1 | 7,000 | shared | 4,000 |
+| o3 mini high | 7,000 | shared | 4,000 |
+| Claude 3.5 Haiku | 7,000 | shared | 4,000 |
+| **Claude 3.5 Sonnet (New)** | **15,000** | **30,000** | **4,000** |
+| Claude 3.7 Sonnet | 15,000 | 30,000 | 4,000 |
+| Gemini 2.0 Flash | 7,000 | shared | 4,000 |
+| Gemini 2.0 Flash-Lite Preview | 7,000 | shared | 4,000 |
+
@@ -15391,26 +16248,6 @@ When a model generates text or code, it does so token by token, predicting the m
The output limit helps to keep the generated text focused, concise, and manageable by preventing the model from going off-topic or generating excessively long responses. It also ensures that the output can be efficiently processed and displayed by downstream applications or user interfaces while managing computational resources.
-## Current foundation model limits
-
-Here is a table with the context window sizes and output limits for each of our [supported models](/cody/capabilities/supported-models).
-
-| **Model** | **Context Window** | **Output Limit** |
-| ---------------- | ------------------ | ---------------- |
-| gpt-3.5-turbo | 16,385 tokens | 4,096 tokens |
-| gpt-4 | 8,192 tokens | 4,096 tokens |
-| gpt-4-turbo | 128,000 tokens | 4,096 tokens |
-| claude instant | 100,000 tokens | 4,096 tokens |
-| claude-2.0 | 100,000 tokens | 4,096 tokens |
-| claude-2.1 | 200,000 tokens | 4,096 tokens |
-| claude-3 Haiku | 200,000 tokens | 4,096 tokens |
-| claude-3.5 Haiku | 200,000 tokens | 4,096 tokens |
-| claude-3 Sonnet | 200,000 tokens | 4,096 tokens |
-| claude-3 Opus | 200,000 tokens | 4,096 tokens |
-| mixtral 8x7b | 32,000 tokens | 4,096 tokens |
-
-These foundation model limits are the LLM models' inherent limits. For instance, Claude 3 models have a 200K context window compared to 8,192 for GPT-4.
-
## Tradeoffs: Size, Accuracy, Latency and Cost
So why does Cody not use each model's full available context window? We need to consider a few tradeoffs, namely, context size, retrieval accuracy, latency, and costs.
@@ -15611,7 +16448,7 @@ Cody Gateway powers the default `"provider": "sourcegraph"` and Cody completions
## Supported Models
-For a full list of supported models and providers, read our [Supported LLMs](/Cody/capabilities/supported-models) docs.
+For a full list of supported models and providers, read our [Supported LLMs](/cody/capabilities/supported-models) docs.
## Setting up Cody Gateway in Sourcegraph Enterprise
@@ -15815,7 +16652,7 @@ For Edit:
- On any file, select some code and a right-click
- Select Cody->Edit Code (optionally, you can do this with Opt+K/Alt+K)
-- Select the default model available (this is Claude 3 Opus)
+- Select the default model available
- See the selection of models and click the model you desire. This model will now be the default model going forward on any new edits
### Selecting Context with @-mentions
@@ -15950,70 +16787,16 @@ Claude 3.5 Sonnet is the default LLM model for inline edits and prompts. If you'
Users on Cody **Free** and **Pro** can choose from a list of [supported LLM models](/cody/capabilities/supported-models) for chat.
-
+
-Enterprise users get Claude 3 (Opus and Sonnet) as the default LLM models without extra cost. Moreover, Enterprise users can use Claude 3.5 models through Cody Gateway, Anthropic BYOK, Amazon Bedrock (limited availability), and GCP Vertex.
+Enterprise users get Claude 3.5 Sonnet as the default LLM model at no extra cost. Moreover, Enterprise users can use Claude 3.5 models through Cody Gateway, Anthropic BYOK, Amazon Bedrock (limited availability), and GCP Vertex.
For enterprise users on Amazon Bedrock: 3.5 Sonnet is unavailable in `us-west-2` but available in `us-east-1`. Check the current model availability on AWS and your customer's instance location before switching. Provisioned throughput via AWS is not supported for 3.5 Sonnet.
-You also get additional capabilities like BYOLLM (Bring Your Own LLM), supporting Single-Tenant and Self Hosted setups for flexible coding environments. Your site administrator determines the LLM, and cannot be changed within the editor. However, Cody Enterprise users when using Cody Gateway have the ability to [configure custom models](/cody/core-concepts/cody-gateway#configuring-custom-models) Anthropic (like Claude 2.0 and Claude Instant), OpenAI (GPT 3.5 and GPT 4) and Google Gemini 1.5 models (Flash and Pro).
+You also get additional capabilities like BYOLLM (Bring Your Own LLM), supporting Single-Tenant and Self Hosted setups for flexible coding environments. Your site administrator determines the LLM, and it cannot be changed within the editor. However, Cody Enterprise users using Cody Gateway can [configure custom models](/cody/core-concepts/cody-gateway#configuring-custom-models) from Anthropic, OpenAI, and Google Gemini.
Read more about all the supported LLM models [here](/cody/capabilities/supported-models)
-## Supported local Ollama models with Cody
-
-Support with Ollama is currently in the Experimental stage and is available for Cody Free and Pro plans.
-
-### Cody Autocomplete with Ollama
-
-To get autocomplete suggestions from Ollama locally, follow these steps:
-
-- Install and run [Ollama](https://ollama.ai/)
-- Download one of the supported local models using `pull`. The `pull` command is used to download models from the Ollama library to your local machine.
- - `ollama pull deepseek-coder-v2` for [deepseek-coder](https://ollama.com/library/deepseek-coder-v2)
- - `ollama pull codellama:13b` for [codellama](https://ollama.ai/library/codellama)
- - `ollama pull starcoder2:7b` for [starcoder2](https://ollama.ai/library/starcoder2)
-- Update Cody's VS Code settings to use the `experimental-ollama` autocomplete provider and configure the right model:
-
-```json
-"cody.autocomplete.advanced.provider": "experimental-ollama",
-"cody.autocomplete.experimental.ollamaOptions": {
- "url": "http://localhost:11434",
- "model": "deepseek-coder-v2"
-}
-```
-
-- Confirm Cody uses Ollama by looking at the Cody output channel or the autocomplete trace view (in the command palette)
-
-### Cody chat with Ollama
-
-
-
-To generate chat with Ollama locally, follow these steps:
-
-- Download [Ollama](https://ollama.com/download)
-- Start Ollama (make sure the Ollama logo is showing up in your menu bar)
-- Select a chat model (model that includes instruct or chat, for example, [gemma:7b-instruct-q4_K_M](https://ollama.com/library/gemma:7b-instruct-q4_K_M)) from the [Ollama Library](https://ollama.com/library)
-- Pull (download) the chat model locally (for example, `ollama pull gemma:7b-instruct-q4_K_M`)
-- Once the chat model is downloaded successfully, open Cody in VS Code
-- Open a new Cody chat
-- In the new chat panel, you should see the chat model you've pulled in the dropdown list
-- Currently, you will need to restart VS Code to see the new models
-
-You can run `ollama list` in your terminal to see what models are currently available on your machine.
-
-#### Run Cody offline with local Ollama models
-
-You can use Cody with or without an internet connection. The offline mode does not require you to sign in with your Sourcegraph account to use Ollama. Click the button below the Ollama logo and you'll be ready to go.
-
-
-
-You still have the option to switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, Mixtral, etc.
-
## Experimental models
Support for the following models is currently in the Experimental stage, and available for Cody Free and Pro plans.
@@ -16153,9 +16936,7 @@ The chat input field has a default `@-mention` [context chips](#context-retrieva
## LLM selection
-Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, Google, and Mixtral. At the same time, Cody Pro and Enterprise users can access more extended models.
-
-Local models are also available through Ollama to Cody Free and Cody Pro users. To use a model in Cody chat, download it and run it in Ollama.
+Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, and Google. At the same time, Cody Pro and Enterprise users can access more extended models.
You can read more about it in our [Supported LLM models docs](/cody/capabilities/supported-models).
@@ -16189,6 +16970,16 @@ To help you get started, there are a few prompts that are available by default.

+## Autocomplete
+
+Cody for Visual Studio supports single and multi-line autocompletions. The autocomplete feature is available in the Cody extension starting in `v0.2.0` and requires Visual Studio version 17.8 or above. It's enabled by default, with settings to turn it off.
+
+
+
+Advanced features like [auto-edit](/cody/capabilities/auto-edit) are not yet supported. You can disable autocomplete from the Cody settings section.
+
@@ -16559,12 +17350,6 @@ Enterprise users who have [model configuration](/Cody/clients/model-configuratio
Read and learn more about the [supported LLMs](/cody/capabilities/supported-models) and [token limits](/cody/core-concepts/token-limits) on Cody Free, Pro and Enterprise.
-## Ollama model support
-
-Ollama support for JetBrains is in the Experimental stage and is available for Cody Free and Pro Plans.
-
-You can use Ollama models locally for Cody’s chat. This lets you chat without sending messages over the internet to an LLM provider so that you can use Cody offline. To use Ollama locally, you’ll need to install Ollama and download a chat model such as CodeGemma or Llama3. [Read here for detailed instructions](https://sourcegraph.com/github.com/sourcegraph/jetbrains/-/blob/README.md#use-ollama-models-for-chat--commands).
-
## Add/remove account
To add or remove an account, you can do the following:
@@ -16641,9 +17426,7 @@ The chat input field has a default `@-mention` [context chips](#context-retrieva
## LLM selection
-Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, Google, and Mixtral. At the same time, Cody Pro and Enterprise users can access more extended models.
-
-Local models are also available through Ollama to Cody Free and Cody Pro users. To use a model in Cody chat, simply download it and run it in Ollama.
+Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, and Google. At the same time, Cody Pro and Enterprise users can access more extended models.
You can read more about it in our [Supported LLM models docs](/cody/capabilities/supported-models).
@@ -16677,17 +17460,15 @@ While Cody for Eclipse is currently in the experimental stage, we are open to fe
Learn how to install the cody command-line tool and using the cody chat subcommand.
-
-Cody CLI support is in the experimental stage.
+
+Cody CLI support is in the Experimental stage for Enterprise accounts.
Cody CLI is the same technology that powers the Cody IDE plugins but available from the command-line.
Use Cody CLI for ad-hoc exploration in your terminal or as part of scripts to automate your workflows.
-Cody CLI is available to Free, Pro, and Enterprise customers.
-
-
+
## Prerequisites
@@ -16766,8 +17547,9 @@ This will open a browser window where you can authenticate with your Sourcegraph
Close the browser tab after authentication is complete.
-- For Cody Pro/Free accounts, create an access token at https://sourcegraph.com/user/settings/tokens.
-- For Cody Enterprise accounts, sign into your Sourcegraph Enterprise account and create an access token under `Account > Settings > Access Tokens`.
+
+- Cody Enterprise accounts can sign into their Sourcegraph Enterprise account and create an access token under `Account > Settings > Access Tokens`.
```shell
export SRC_ENDPOINT=ENDPOINT
export SRC_ACCESS_TOKEN=ACCESS_TOKEN
@@ -16789,7 +17571,6 @@ cody auth whoami
**Skip this step if you have already authenticated with the `cody auth login` command.**
-
If you prefer not to let Cody CLI store your access token, you can also pass the endpoint URL and access token through the environment variables `SRC_ENDPOINT` and `SRC_ACCESS_TOKEN`.
@@ -16807,7 +17588,7 @@ $env:SRC_ACCESS_TOKEN = "ACCESS_TOKEN"
-It's recommended to store these access tokens in a secure location.
+It's recommended to store these access tokens in a secure location.
For example, you can store them with a password manager like [1Password](https://1password.com/) or [Bitwarden](https://bitwarden.com/).
It is not recommended to export these variables in your shell startup script because it will expose your access token to all commands you run from the terminal. Instead, consider sourcing these environment variables on-demand when you need to authenticate with the Cody CLI.
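
For example, you can keep the exports in a small file and source it only when needed, so the token is not exposed to every command you run (the endpoint and token below are placeholders):

```shell
# cody-env.sh -- source this on demand rather than from your shell
# startup script, so the token is only visible to the current session.
export SRC_ENDPOINT="https://sourcegraph.example.com"   # hypothetical instance URL
export SRC_ACCESS_TOKEN="sgp_example_token"             # placeholder; use your real token
```

Then run `source cody-env.sh` in the shell session where you need the Cody CLI authenticated.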
@@ -16885,9 +17666,6 @@ Use the `-` trailing argument as an alternative to `--stdin` to read the diff fr
git diff | cody chat -m 'Write a commit message for this diff' -
```
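
The same command using the `--stdin` flag instead of the trailing `-`:

```shell
git diff | cody chat --stdin -m 'Write a commit message for this diff'
```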
-
-
-
@@ -16899,7 +17677,6 @@ git diff | cody chat -m 'Write a commit message for this diff' -
-
@@ -16914,35 +17691,33 @@ git diff | cody chat -m 'Write a commit message for this diff' -
## Chat
-| **Feature** | **VS Code** | **JetBrains** | **Visual Studio** | **Eclipse** | **Web** | **CLI** |
-| ---------------------------------------- | ----------- | ------------- | ----------------- | ----------- | -------------------- | ------- |
-| Chat | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| Chat history | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
-| Clear chat history | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
-| Edit sent messages | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
-| SmartApply/Execute | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
-| Show context files | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
-| @-file | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
-| @-symbol | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ |
-| Ollama support (experimental) | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
-| LLM Selection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
-| **Context Selection** | | | | | | |
-| Single-repo context | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
-| Multi-repo context | ❌ | ❌ | ❌ | ❌ | ✅ (public code only) | ❌ |
-| Local context | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ |
-| OpenCtx context providers (experimental) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
-| **Prompts** | | | | | | |
-| Access to prompts and Prompt library | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
-| Promoted Prompts | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ |
+| **Feature** | **VS Code** | **JetBrains** | **Visual Studio** | **Web** | **CLI** |
+| ---------------------------------------- | ----------- | ------------- | ----------------- | -------------------- | ------- |
+| Chat | ✅ | ✅ | ✅ | ✅ | ✅ |
+| Chat history | ✅ | ✅ | ✅ | ✅ | ❌ |
+| Clear chat history | ✅ | ✅ | ✅ | ✅ | ❌ |
+| Edit sent messages | ✅ | ✅ | ✅ | ✅ | ❌ |
+| SmartApply/Execute | ✅ | ❌ | ❌ | ❌ | ❌ |
+| Show context files | ✅ | ✅ | ✅ | ✅ | ❌ |
+| @-file | ✅ | ✅ | ✅ | ✅ | ❌ |
+| @-symbol | ✅ | ❌ | ✅ | ✅ | ❌ |
+| LLM Selection | ✅ | ✅ | ✅ | ✅ | ❌ |
+| **Context Selection** | | | | | |
+| Single-repo context | ✅ | ✅ | ✅ | ✅ | ❌ |
+| Multi-repo context | ❌ | ❌ | ❌ | ✅ (public code only) | ❌ |
+| Local context | ✅ | ✅ | ✅ | ❌ | ✅ |
+| OpenCtx context providers (experimental) | ✅ | ❌ | ❌ | ❌ | ❌ |
+| **Prompts** | | | | | |
+| Access to prompts and Prompt library | ✅ | ✅ | ✅ | ✅ | ❌ |
+| Promoted Prompts | ✅ | ❌ | ❌ | ✅ | ❌ |
## Code Autocomplete
-| **Feature** | **VS Code** | **JetBrains** |
-| --------------------------------------------- | ----------- | ------------- |
-| Single and multi-line autocompletion | ✅ | ✅ |
-| Cycle through multiple completion suggestions | ✅ | ✅ |
-| Accept suggestions word-by-word | ✅ | ❌ |
-| Ollama support (experimental) | ✅ | ❌ |
+| **Feature** | **VS Code** | **JetBrains** | **Visual Studio** |
+| --------------------------------------------- | ----------- | ------------- | ----------------- |
+| Single and multi-line autocompletion | ✅ | ✅ | ✅ |
+| Cycle through multiple completion suggestions | ✅ | ✅ | ✅ |
+| Accept suggestions word-by-word | ✅ | ❌ | ❌ |
A few exceptions apply to Cody Pro and Cody Enterprise users:
@@ -17090,7 +17865,7 @@ There are two ways of configuring Cody for LLM providers:
Learn how to use Cody in the web interface with your Sourcegraph.com instance.
-In addition to the Cody extensions for [VS Code](/cody/clients/install-vscode), [JetBrains](/cody/clients/install-jetbrains), [Visual Studio](/cody/clients/install-visual-studio ), and [Eclispe](/cody/clients/install-eclipse) IDEs, Cody is also available in the Sourcegraph web app. Community users can use Cody for free by logging into their accounts on Sourcegraph.com, and enterprise users can use Cody within their Sourcegraph instance.
+In addition to the Cody extensions for [VS Code](/cody/clients/install-vscode), [JetBrains](/cody/clients/install-jetbrains), and [Visual Studio](/cody/clients/install-visual-studio) IDEs, Cody is also available in the Sourcegraph web app. Community users can use Cody for free by logging into their accounts on Sourcegraph.com, and enterprise users can use Cody within their Sourcegraph instance.
@@ -17163,25 +17938,20 @@ Cody supports a variety of cutting-edge large language models for use in chat an
| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** | | | | |
| :------------ | :-------------------------------------------------------------------------------------------------------------------------------------------- | :----------- | :----------- | :------------- | --- | --- | --- | --- |
-| OpenAI | [gpt-3.5 turbo](https://platform.openai.com/docs/models/gpt-3-5-turbo) | ✅ | ✅ | ✅ | | | | |
-| OpenAI | [gpt-4](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=to%20Apr%202023-,gpt%2D4,-Currently%20points%20to) | - | - | ✅ | | | | |
-| OpenAI | [gpt-4 turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=TRAINING%20DATA-,gpt%2D4%2D0125%2Dpreview,-New%20GPT%2D4) | - | ✅ | ✅ | | | | |
-| OpenAI | [gpt-4o](https://platform.openai.com/docs/models/gpt-4o) | - | ✅ | ✅ | | | | |
-| Anthropic | [claude-3 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
-| Anthropic | [claude-3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
-| Anthropic | [claude-3 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
-| Anthropic | [claude-3.5 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
-| Anthropic | [claude-3.5 Sonnet (New)](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
-| Anthropic | [claude-3 Opus](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | ✅ | ✅ | | | | |
-| Mistral | [mixtral 8x7b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - | | | | |
-| Mistral | [mixtral 8x22b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - | | | | |
-| Ollama | [variety](https://ollama.com/) | experimental | experimental | - | | | | |
-| Google Gemini | [1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (Beta) | | | | |
-| Google Gemini | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ (Beta) | | | | |
-| Google Gemini | [2.0 Flash Experimental](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | |
-| | | | | | | | | |
-
-To use Claude 3 (Opus and Sonnets) models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.
+| OpenAI | [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=TRAINING%20DATA-,gpt%2D4%2D0125%2Dpreview,-New%20GPT%2D4) | - | ✅ | ✅ | | | | |
+| OpenAI | [GPT-4o](https://platform.openai.com/docs/models#gpt-4o) | - | ✅ | ✅ | | | | |
+| OpenAI | [GPT-4o-mini](https://platform.openai.com/docs/models#gpt-4o-mini) | ✅ | ✅ | ✅ | | | | |
+| OpenAI | [o3-mini-medium](https://openai.com/index/openai-o3-mini/) (experimental) | ✅ | ✅ | ✅ | | | | |
+| OpenAI | [o3-mini-high](https://openai.com/index/openai-o3-mini/) (experimental) | - | - | ✅ | | | | |
+| OpenAI | [o1](https://platform.openai.com/docs/models#o1) | - | ✅ | ✅ | | | | |
+| Anthropic | [Claude 3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
+| Anthropic | [Claude 3.5 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
+| Anthropic | [Claude 3.7 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | ✅ | ✅ | | | | |
+| Google | [Gemini 1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (beta) | | | | |
+| Google | [Gemini 2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | |
+| Google | [Gemini 2.0 Flash-Lite Preview](https://deepmind.google/technologies/gemini/flash/) (experimental) | ✅ | ✅ | ✅ | | | | |
+
+To use Claude 3 Sonnet models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version. Claude 3.7 Sonnet with thinking is not supported for BYOK deployments.
## Autocomplete
@@ -17192,13 +17962,11 @@ Cody uses a set of models for autocomplete which are suited for the low latency
| Fireworks.ai | [DeepSeek-Coder-V2](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) | ✅ | ✅ | ✅ | | | | |
| Fireworks.ai | [StarCoder](https://arxiv.org/abs/2305.06161) | - | - | ✅ | | | | |
| Anthropic | [claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | - | ✅ | | | | |
-| Google Gemini (Beta) | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | - | - | ✅ | | | | |
-| Ollama (Experimental) | [variety](https://ollama.com/) | ✅ | ✅ | - | | | | |
| | | | | | | | | |
The default autocomplete model for Cody Free, Pro and Enterprise users is DeepSeek-Coder-V2.
-Read here for [Ollama setup instructions](https://sourcegraph.com/docs/cody/clients/install-vscode#supported-local-ollama-models-with-cody). For information on context token limits, see our [documentation here](/cody/core-concepts/token-limits).
+The DeepSeek model used by Sourcegraph is hosted by Fireworks.ai as a single-tenant service in a US-based data center. For more information, see our [Cody FAQ](https://sourcegraph.com/docs/cody/faq#is-any-of-my-data-sent-to-deepseek).
@@ -17490,7 +18258,7 @@ Click the **New prompt** button from the **Prompt Library** page.
There are also a few advanced options that you can configure.
-
+
### Draft prompts
@@ -17535,6 +18303,24 @@ Once site admins create tags, other users in your organization can assign tags t
You can assign multiple tags to a prompt and group them based on their functionality, category, or any other criteria for your organization. In addition, with tags assigned to prompts, you can filter prompts by tags in the Prompt Library.
+## Specific and dynamic context
+
+Sourcegraph 6.0 adds Beta support for the `@` mention menu in the prompt library.
+
+When writing prompts, you can leverage both specific and dynamic context through the `@` mention system.
+
+
+
+Type `@` to open a dropdown menu that lets you reference specific context like symbols, directories, files, repositories and web URLs.
+
+When selecting a web URL, type the full address, including the `https://` prefix, for example, https://sourcegraph.com.
+
+For dynamic context that adapts based on what the user is working on, the prompt editor provides special mentions for the current selection, current file, current repository, current directory, and open tabs.
+When a user runs a prompt template containing dynamic context mentions, they are automatically resolved to the appropriate specific context based on the user's current workspace state.
+To add dynamic context, click on one of the buttons below the prompt editor. We will soon move the buttons into the `@` mention menu as well.
+
+This powerful combination allows prompt authors to create templates that can intelligently access both explicitly defined context and contextually relevant information at runtime.
+
## Run prompts
You can run prompts via:
@@ -17604,6 +18390,66 @@ Please refer to the [context providers docs](https://openctx.org/) for instructi
+
+# Supported local Ollama models with Cody
+
+{/* Internal docs only. Offloading from our production docs for now. */}
+
+
+Support with Ollama is currently in the Experimental stage and is available for Cody Free and Pro plans.
+
+## Cody Autocomplete with Ollama
+
+To get autocomplete suggestions from Ollama locally, follow these steps:
+
+- Install and run [Ollama](https://ollama.ai/)
+- Download one of the supported local models using `ollama pull`, which downloads models from the Ollama library to your local machine:
+ - `ollama pull deepseek-coder-v2` for [deepseek-coder](https://ollama.com/library/deepseek-coder-v2)
+ - `ollama pull codellama:13b` for [codellama](https://ollama.ai/library/codellama)
+ - `ollama pull starcoder2:7b` for [starcoder2](https://ollama.ai/library/starcoder2)
+- Update Cody's VS Code settings to use the `experimental-ollama` autocomplete provider and configure the right model:
+
+```json
+"cody.autocomplete.advanced.provider": "experimental-ollama",
+"cody.autocomplete.experimental.ollamaOptions": {
+ "url": "http://localhost:11434",
+ "model": "deepseek-coder-v2"
+}
+```
+
+- Confirm Cody uses Ollama by looking at the Cody output channel or the autocomplete trace view (in the command palette)
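
The connection can also be sanity-checked outside of Cody. A minimal sketch, assuming the default Ollama port (`11434`) and the `deepseek-coder-v2` model pulled above:

```sh
OLLAMA_URL="http://localhost:11434"

# Only query the server if it is actually listening
if curl -sf "$OLLAMA_URL/api/tags" > /dev/null 2>&1; then
  # List locally pulled models (same information as `ollama list`)
  curl -s "$OLLAMA_URL/api/tags"
  # Request a completion directly from Ollama, outside of Cody
  curl -s "$OLLAMA_URL/api/generate" \
    -d '{"model": "deepseek-coder-v2", "prompt": "def fib(n):", "stream": false}'
else
  echo "Ollama is not running at $OLLAMA_URL"
fi
```

If the `generate` call returns a completion here but Cody still does not, the problem is likely in the extension settings rather than in Ollama itself.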
+
+## Cody chat with Ollama
+
+
+
+To generate chat with Ollama locally, follow these steps:
+
+- Download [Ollama](https://ollama.com/download)
+- Start Ollama (make sure the Ollama logo is showing up in your menu bar)
+- Select a chat model (a model whose name includes `instruct` or `chat`, for example, [gemma:7b-instruct-q4_K_M](https://ollama.com/library/gemma:7b-instruct-q4_K_M)) from the [Ollama Library](https://ollama.com/library)
+- Pull (download) the chat model locally (for example, `ollama pull gemma:7b-instruct-q4_K_M`)
+- Once the chat model is downloaded successfully, open Cody in VS Code
+- Open a new Cody chat
+- In the new chat panel, you should see the chat model you've pulled in the dropdown list
+- Currently, you will need to restart VS Code to see the new models
+
+You can run `ollama list` in your terminal to see what models are currently available on your machine.
+
+### Run Cody offline with local Ollama models
+
+You can use Cody with or without an internet connection. The offline mode does not require you to sign in with your Sourcegraph account to use Ollama. Click the button below the Ollama logo, and you'll be ready to go.
+
+
+
+You still have the option to switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, etc.
+
+
+
# Cody Capabilities
@@ -17801,7 +18647,7 @@ You can detect code smells by the **find-code-smells** prompt from the Prompts d
## Code Actions
-Code Actions are available onlyin Cody VS Code extension.
+Code Actions are available only in the Cody VS Code extension.
When you make a mistake while writing code, Cody's **Code Actions** come into play and a red warning triggers. Along with this, you get a lightbulb icon. If you click on this lightbulb icon, there is an **Ask Cody to fix** option.
@@ -17822,7 +18668,7 @@ When you make a mistake while writing code, Cody's **Code Actions** come into pl
You can **chat** with Cody to ask questions about your code, generate code, and edit code. By default, Cody has the context of your open file and entire repository, and you can use `@` to add context for specific files, symbols, remote repositories, or other non-code artifacts.
-You can do it from the **chat** panel of the supported editor extensions ([VS Code](/cody/clients/install-vscode), [JetBrains](/cody/clients/install-jetbrains), [Visual Studio](/cody/clients/install-visual-studio), [Eclispe](/cody/clients/install-eclipse)) or in the [web](/cody/clients/cody-with-sourcegraph) app.
+You can do it from the **chat** panel of the supported editor extensions ([VS Code](/cody/clients/install-vscode), [JetBrains](/cody/clients/install-jetbrains), [Visual Studio](/cody/clients/install-visual-studio)) or in the [web](/cody/clients/cody-with-sourcegraph) app.
## Prerequisites
@@ -17874,16 +18720,6 @@ You can add new custom context by adding `@-mention` context chips to the chat.
You can use `@-mention` web URLs to pull live information like docs. You can connect Cody to OpenCtx to `@-mention` non-code artifacts like Google Docs, Notion pages, Jira tickets, and Linear issues.
-## Run offline
-
-Support with Ollama is currently in the Experimental stage and is available for Cody Free and Pro plans.
-
-Cody chat can run offline with Ollama. The offline mode does not require you to sign in with your Sourcegraph account to use Ollama. Click the button below the Ollama logo, and you'll be ready to go.
-
-
-
-You can still switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, Mixtral, etc.
-
## LLM selection
Cody allows you to select the LLM you want to use for your chat, which is optimized for speed versus accuracy. Cody Free and Pro users can select multiple models. Enterprise users with the new [model configuration](/cody/clients/model-configuration) can use the LLM selection dropdown to choose a chat model.
@@ -17939,29 +18775,6 @@ To use Cody's chat, you'll need the following:
The enhanced chat experience includes everything in the Free plan, plus the following:
-## Intent detection
-
-Intent detection automatically analyzes user queries and determines whether to provide an AI chat or code search response. This functionality helps simplify developer workflows by providing the most appropriate type of response without requiring explicit mode switching.
-
-### How it works
-
-When a user submits a query in the chat panel, the intent detection component:
-
-- Analyzes the query content and structure
-- Determines the most appropriate response type (search or chat)
-- Returns results in the optimal format
-- Provides the ability to toggle between response types manually
-
-Let's look at an example of how this might work:
-
-#### Search-based response
-
-
-
-#### Chat-based response
-
-
-
## Smart search integration
The smart search integration enhances Sourcegraph's chat experience by providing lightweight code search capabilities directly within the chat interface. This feature simplifies developer workflows by offering quick access to code search without leaving the chat environment.
@@ -17997,15 +18810,16 @@ Search results generated through smart search integration can be automatically u
The following is a general walkthrough of the chat experience:
1. The user enters a query in the chat interface
-2. The system analyzes the query through intent detection
-3. If it's a search query:
+2. By default, the user gets a chat response to the query
+3. To get integrated search results, toggle to **Run as search** in the drop-down selector, or use `Cmd+Opt+Enter` (macOS)
+4. For search:
- Displays ranked results with code snippets
- Shows personalized repository ordering
- Provides checkboxes to select context for follow-ups
-4. If it's a chat query:
+5. For chat:
- Delivers AI-powered responses
- Can incorporate previous search results as context
-5. Users can:
+6. Users can:
- Switch between search and chat modes
- Click on results to open files in their editor
- Ask follow-up questions using selected context
@@ -18038,7 +18852,7 @@ The autocompletion model is designed to enhance speed, accuracy, and the overall
First, you'll need the following setup:
- A Free or Pro account via Sourcegraph.com or a Sourcegraph Enterprise instance
-- A supported editor extension (VS Code, JetBrains, Visual Studio, Eclipse)
+- A supported editor extension (VS Code, JetBrains, Visual Studio)
The autocomplete feature is enabled by default on all IDE extensions, e.g., VS Code and JetBrains. Generally, there's a checkbox in the extension settings that confirms whether the autocomplete feature is enabled. In addition, some IDEs expose optional autocomplete settings. For example, JetBrains IDEs let you customize the colors and styles of autocomplete suggestions.
@@ -18098,6 +18912,11 @@ Auto-edit is available for Enterprise customers with [Sourcegraph Cody Gateway](
- Enable the feature flag `cody-autoedit-experiment-enabled-flag`
- Add `fireworks::*` as an [allowed provider](https://sourcegraph.com/docs/cody/enterprise/model-configuration#model-filters) (see below)
2. Once enabled, developers will receive a notification in their editor to turn it on
+3. If you skip the notification or don't see it, you can manually enable/disable it from the Cody extension settings.
+
+
+
+There is a drop-down menu for Cody's suggestions mode. You can choose between the default autocomplete and the auto-edit mode, or you can turn off the suggestions completely.
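
The same choice can be made directly in VS Code's `settings.json`. A sketch, assuming the `cody.suggestions.mode` setting exposed by recent versions of the extension (the setting name may vary by version):

```json
{
  // One of "autocomplete", "auto-edit", or "off"
  "cody.suggestions.mode": "auto-edit"
}
```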
The following example demonstrates how to add Fireworks as an allowed LLM provider:
@@ -18110,7 +18929,7 @@ The following example demonstrates how to add Fireworks as an allowed LLM provid
// Only allow "beta" and "stable" models.
// Not "experimental" or "deprecated".
"statusFilter": ["beta", "stable"],
-
+
// Allow any models provided by Anthropic, OpenAI, Google and Fireworks.
"allow": [
"anthropic::*", // Anthropic models
@@ -18159,7 +18978,7 @@ The auto-edit feature can help you with various repetitive tasks in your code:
Learn about the agentic chat experience, an exclusive chat-based AI agent with enhanced capabilities.
-Agentic chat is currently in the Experimental stage for Cody Pro and Enterprise and is supported on VS Code, JetBrains, Visual Studio editor extensions and Web. Usage may be limited at this stage.
+Agentic chat (available in version 6.0) is currently in the Experimental stage for Cody Pro and Enterprise and is supported on the VS Code, JetBrains, and Visual Studio editor extensions and on the Web. Usage may be limited at this stage.
Cody's agentic chat experience is an AI agent that can evaluate context and fetch any additional context (OpenCtx, terminal, etc.) by providing enhanced, context-aware chat capabilities. It extends Cody's functionality by proactively understanding your coding environment and gathering relevant information based on your requests before responding. These features help you get noticeably higher-quality responses.
@@ -18174,21 +18993,6 @@ The agentic chat experience leverages several key capabilities, including:
- **Iterative context improvement**: Performs multiple review loops to refine the context and ensure a thorough understanding
- **Enhanced response accuracy**: Leverages comprehensive context to provide more accurate and relevant responses, reducing the risk of hallucinations
-## Enable agentic chat
-
-Pro users can find the agentic chat option in the LLM selector drop-down. Enterprise customers must opt-in to access this agentic chat feature.
-
-
-
-### Getting agentic chat access for Enterprise customers
-
-For the experimental release, agentic chat is specifically limited to using Claude 3.5 Haiku for the reflection steps and Claude 3.5 Sonnet for the final response to provide a good balance between quality and latency. Therefore, your enterprise instance must have access to both Claude 3.5 Sonnet and Claude 3.5 Haiku to use agentic chat. These models may be changed during the experimental phase to optimize for quality and/or latency.
-
-Additionally, enterprise users need to upgrade their supported client (VS Code, JetBrains, and Visual Studio) to the latest version of the plugin by enabling the following feature flags on their Sourcegraph Instance:
-
-- `agentic-chat-experimental` to get access to the feature
-- `agentic-chat-cli-tool-experimental` to allow [terminal access](#terminal-commands)
-
## What can agentic chat do?
Agentic chat can help you with the following:
@@ -18205,7 +19009,6 @@ It has access to a suite of tools for retrieving relevant context. These tools i
It integrates seamlessly with external services, such as web content retrieval and issue tracking systems, using OpenCtx providers. To learn more, [read the OpenCtx docs](/cody/capabilities/openctx).
-
Terminal access is not supported on the Web. It currently only works with VS Code, JetBrains, and Visual Studio editor extensions.
## Terminal access
@@ -18226,11 +19029,24 @@ Agentic chat can be helpful to assist you with a wide range of tasks, including:
- **Error resolution**: It can automatically identify error sources and suggest fixes by analyzing error logs
- **Better unit tests**: Automatically includes imports and other missing contexts to generate better unit tests
-## Known limitations
+## Enable agentic chat
+
+### Getting agentic chat access for Pro users
+
+Pro users can find the agentic chat option in the LLM selector drop-down.
-### Enterprise deployments
+
+
+### Getting agentic chat access for Enterprise customers
+
+Enterprise customers must opt in to access this agentic chat feature (reach out to your account team for access).
+
+For the experimental release, agentic chat is specifically limited to using Claude 3.5 Haiku for the reflection steps and Claude 3.5 Sonnet for the final response to provide a good balance between quality and latency. Therefore, your enterprise instance must have access to both Claude 3.5 Sonnet and Claude 3.5 Haiku to use agentic chat. These models may be changed during the experimental phase to optimize for quality and/or latency.
+
+Additionally, enterprise users need to upgrade their supported client (VS Code, JetBrains, and Visual Studio) to the latest version of the plugin by enabling the following feature flags on their Sourcegraph Instance:
-All customers are required to have Claude 3.5 Sonnet and Claude 3.5 Haiku enabled on their Sourcegraph instance (this requires Sourcegraph v5.9 and new [model configuration](/cody/enterprise/model-configuration)).
+- `agentic-chat-experimental` to get access to the feature
+- `agentic-chat-cli-tool-experimental` to allow [terminal access](#terminal-commands)
@@ -23036,7 +23852,7 @@ Precise code navigation relies on the open source [SCIP Code Intelligence Protoc
## Setting up code navigation for your codebase
-There are several options for setting up precise code navigation:
+There are several options for setting up precise code navigation, listed below. However, we always recommend you start by manually indexing your repo locally using the [appropriate indexer](/code-navigation/writing_an_indexer#quick-reference) for your language. Code and build systems vary by project, and confirming that the indexer runs successfully on your machine first leads to a smoother experience: it is vastly easier to debug and iterate on issues locally than in CI/CD or auto-indexing.
1. **Manual indexing**. Index a repository and upload it to your Sourcegraph instance:
@@ -25420,7 +26236,7 @@ For unsupported private connectivity methods, Sourcegraph offers connectivity vi
### Health monitoring, support, and SLAs
- Instance performance and health [monitored](/admin/observability/) by our team's on-call engineers.
-- [Support and SLAs](https://handbook.sourcegraph.com/support#for-customers-with-managed-instances).
+- [Support and SLAs](../sla/index.mdx).
### Backup and restore
@@ -25498,7 +26314,7 @@ Supported destinations:
- The Sourcegraph instance can only be accessible via a public IP. Running it in a private network and pairing it with your private network via site-to-site VPN or VPC Peering is not yet supported.
- Code hosts or user authentication providers running in a private network are not yet supported. They have to be publicly available or they must allow incoming traffic from Sourcegraph-owned static IP addresses. We do not have proper support for other connectivity methods, e.g. site-to-site VPN, VPC peering, tunneling.
- Instances currently run only on Google Cloud Platform in the [chosen regions](#multiple-region-availability). Other regions and cloud providers (such as AWS or Azure) are not yet supported.
-- Some [configuration options](/admin/config/) are managed by Sourcegraph and cannot be overridden by customers, e.g. feature flags, experimental features.
+- Some [configuration options](/admin/config/) are managed by Sourcegraph and cannot be overridden by customers, e.g. feature flags, experimental features, and auto-indexing policy. Please reach out to your account team if you would like to make changes to these settings.
## Security
@@ -25506,7 +26322,7 @@ Your managed instance will be accessible over HTTPS/TLS, provide storage volumes
For all managed instances, we will provide security capabilities from Cloudflare such as WAF and rate-limiting to protect your instance from malicious traffic.
-Your instance will be hosted in isolated Google Cloud infrastructure. See our [employee handbook](https://handbook.sourcegraph.com/departments/cloud/technical-docs/) to learn more about the cloud architecture we use. Both your team and limited Sourcegraph personnel will have application-level administrator access to the instance.
+Your instance will be hosted in isolated Google Cloud infrastructure. See our [FAQ](#faq) below to learn more about the cloud architecture. Both your team and limited Sourcegraph personnel will have application-level administrator access to the instance.
Only essential Sourcegraph personnel will have access to the instance, server, code, and any other sensitive materials, such as tokens or keys. The employees or contractors with access are bound by the same terms as Sourcegraph itself. Learn more in our [security policies for Sourcegraph Cloud](https://about.sourcegraph.com/security) or [contact us](https://about.sourcegraph.com/contact/sales) with any questions or concerns. You may also request a copy of our SOC 2 Report on our [security portal](https://security.sourcegraph.com).
@@ -31401,11 +32217,11 @@ If the repository containing the workspaces is really large and it's not feasibl
Learn in detail about how to create, view, and filter your Batch Changes.
-Batch changes are created by writing a [batch spec](/batch-changes/batch-spec-yaml-reference) and executing that batch spec with the [Sourcegraph CLI](https://github.com/sourcegraph/src-cli) `src`.
+Batch Changes are created by writing a [batch spec](/batch-changes/batch-spec-yaml-reference) and executing that batch spec with the [Sourcegraph CLI](https://github.com/sourcegraph/src-cli) `src`.
-Batch changes can also be used on [multiple projects within a monorepo](/batch-changes/creating-changesets-per-project-in-monorepos) by using the `workspaces` key in your batch spec.
+Batch Changes can also be used on [multiple projects within a monorepo](/batch-changes/creating-changesets-per-project-in-monorepos) by using the `workspaces` key in your batch spec.
-There are two ways of creating a Batch Change:
+There are two ways of creating a batch change:
1. On your local machine, with the [Sourcegraph CLI](#create-a-batch-change-with-the-sourcegraph-cli)
2. Remotely, with [server-side execution](/batch-changes/server-side)
@@ -31422,7 +32238,7 @@ This part of the guide will walk you through creating a batch change on your loc
### Writing a batch spec
-To create a Batch Change, you need a **batch spec** describing the change. Here is an example batch spec that describes a batch change to add **Hello World** to all `README` files:
+To create a batch change, you need a **batch spec** describing the change. Here is an example batch spec that describes a batch change to add **Hello World** to all `README` files:
```yaml
version: 2
@@ -31504,9 +32320,9 @@ That can be useful if you want to update a single field in the batch spec, i.e.,
### Creating a batch change in a different namespace
-Batch changes are uniquely identified by their name and namespace. The namespace can be any Sourcegraph username or the name of a Sourcegraph organization.
+Batch Changes are uniquely identified by their name and namespace. The namespace can be any Sourcegraph username or the name of a Sourcegraph organization.
-By default, batch changes will use your username on Sourcegraph as your namespace. To create batch changes in a different namespace, use the `-namespace` flag when previewing or applying a batch spec:
+By default, Batch Changes will use your username on Sourcegraph as your namespace. To create Batch Changes in a different namespace, use the `-namespace` flag when previewing or applying a batch spec:
```bash
src batch preview -f your_batch_spec.yaml -namespace
@@ -31575,13 +32391,13 @@ Congratulations, you ran your first batch change server-side 🎊
## Viewing batch changes
-You can view a list of all batch changes by clicking the **Batch Changes** icon in the top navigation bar:
+You can view a list of batch changes by clicking the **Batch Changes** icon in the top navigation bar:

-## Filtering batch changes
+## Filtering Batch Changes
-You can also use the filters to switch between showing all open or closed batch changes.
+You can also use the filters to switch between showing all open or closed Batch Changes.

@@ -33612,6 +34428,10 @@ src search -json 'repo:pallets/flask error'
You can then consume the JSON output directly, add `--get-curl` to get a `curl` execution line, and more. See [the `src` CLI tool](https://sourcegraph.com/github.com/sourcegraph/src-cli) for more details.
+## Incomplete search results when using GraphQL
+
+Sometimes, users find discrepancies between the number of results returned in the UI and via the GraphQL API for the same query. This can be avoided by adding `count:all` to the query used in GraphQL, which ensures all matching records are fetched.
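+
+As a sketch, using the `src` CLI query format shown earlier in this document (the repository name is just an illustration):
+
```sh
QUERY='repo:pallets/flask error count:all'   # count:all fetches every matching record

# Run the query if the src CLI is installed and authenticated
if command -v src > /dev/null 2>&1; then
  src search -json "$QUERY"
else
  echo "src CLI not installed; the query would be: $QUERY"
fi
```
+
+The same `count:all` filter applies when calling the GraphQL API directly.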
+
@@ -34418,7 +35238,8 @@ Many of the metrics above are also available for Cody only. However, some user d
| Hours saved | The number of hours saved by Cody users, assuming 2 minutes saved per completion |
| Completions by day | The number of completions suggested by day and by editor. |
| Completion acceptance rate (CAR) | The percent of completions presented to a user for at least 750ms accepted by day, the editor, day, and month. |
-| Weighted completion acceptance rate (wCAR) | Similar to CAR, but weighted by the number of characters presented in the completion, and only counting those fully retained by the user for X minutes after accepting the completion, by the editor, day, and month. |
+| Weighted completion acceptance rate (wCAR) | Similar to CAR, but weighted by the number of characters presented in the completion, by the editor, day, and month. This assigns more "weight" to accepted completions that provide more code to the user. |
+| Completion persistence rate | Percent of completions that are retained or mostly retained (67%+ of inserted text) after various time intervals. |
| Average completion latency (ms) | The average milliseconds of latency before a user is presented with a completion suggestion by an editor. |
| Acceptance rate by language | CAR and total completion suggestions broken down by editor during the selected time |
@@ -34431,8 +35252,11 @@ Many of the metrics above are also available for Cody only. However, some user d
| Hours saved by chats | Total hours saved through Cody chat interactions during the selected time, assuming 5 minutes saved per chat |
| Cody chats by day | Daily count of chat interactions |
| Cody chat users | Daily count of chat users |
-| Prompts created, edited, and deleted by day | Daily count of prompt management activities, including creation, modification, and removal |
-| Users creating, editing, and deleting prompts by day | Number of unique users performing prompt management activities each day |
+| Lines of code inserted | Lines of code generated by Cody in chat that get applied, inserted, or pasted into the editor. Only VS Code is included in this metric for now |
+| Insert rate | Percent of code generated by Cody in chat that gets applied, inserted, or pasted into the editor. Only VS Code is included in this metric for now |
+| Chat apply & insert persistence rate | Percent of code inserted by Apply and Insert actions that are retained or mostly retained (67%+ of inserted text) after various time intervals |
+| Prompts created, edited, and deleted by day | Daily count of prompt management activities, including creation, modification, and removal |
+| Users creating, editing, and deleting prompts by day | Number of unique users performing prompt management activities each day |
### Command metrics (deprecated)
@@ -79907,8 +80731,8 @@ rate(src_embeddings_cache_miss_bytes[10m])
{/* DO NOT EDIT: generated via: bazel run //doc/admin/observability:write_monitoring_docs */}
-This document contains a complete reference of all alerts in Sourcegraph's monitoring, and next steps for when you find alerts that are firing.
-If your alert isn't mentioned here, or if the next steps don't help, ontact us](mailto:support@sourcegraph.com) for assistance.
+This document contains a complete reference of all alerts in Sourcegraph's monitoring, and next steps for when you find alerts that are firing.
+If your alert isn't mentioned here, or if the next steps don't help, contact us at `support@sourcegraph.com` for assistance.
To learn more about Sourcegraph's alerting and how to set up alerts, see [our alerting guide](/admin/observability/alerting).
@@ -90007,7 +90831,7 @@ precise-code-intel-worker-9b69b5b59-z7xx4 0/1 CrashLoopBackOff 415
# PostgreSQL 12 to 16 Schema Drift
-In Sourcegraph versions `5.10.x` and `5.11.x` we support both PostgreSQL 12 and 16. However, Sourcegraph's database management tool `migrator` expects the database schema of the various Sourcegraph databases to be in an exact expected state. The upgrade from PostgreSQL 12 to 16 is opinionated and automatically mutates the schema without running our application defined migrations. Starting in Sourcegraph `5.10.0` we expect databases to be in PosttgresSQL 16 and as such our tooling will identify schema drift in PostgreSQL 12 databases. This drift does not impact the functionality of the Sourcegraph instance but will stop migrator's multiversion `upgrade` command and `autoupgrade` from executing.
+In Sourcegraph versions `5.10.x` and `5.11.x` we support both PostgreSQL 12 and 16. However, Sourcegraph's database management tool `migrator` expects the database schema of the various Sourcegraph databases to be in an exact expected state. The upgrade from PostgreSQL 12 to 16 is opinionated and automatically mutates the schema without running our application-defined migrations. Starting in Sourcegraph `5.10.0` we expect databases to be on PostgreSQL 16, and as such our tooling will identify schema drift in PostgreSQL 12 databases. This drift does not impact the functionality of the Sourcegraph instance but will stop migrator's multiversion `upgrade` command and `autoupgrade` from executing.
The drift takes the following general form, dropping table prefixes to columns in views, and changing `uuid` types to `gen_random_uuid()`:
```diff
@@ -90184,11 +91008,11 @@ Diff:
## Solutions for Handling Schema Drift
-If you're confident that your instance is seeing database drift associated with the PG12 to PG16 upgrade, you can run a nultiversion upgrade via migrator `upgrade` or run `autoupgrade` using the following options.
+If you're confident that your instance is seeing database drift associated with the PG12 to PG16 upgrade, you can run a multiversion upgrade via migrator `upgrade` or run `autoupgrade` using the following options.
To run `autoupgrade` via the frontend, set the `SRC_AUTOUPGRADE_IGNORE_DRIFT=true` environment variable in the frontend container.
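+
+For example, in a Docker Compose override (a sketch; assumes the frontend service is named `sourcegraph-frontend-0`, as in the standard docker-compose deployment):
+
+```yaml
+services:
+  sourcegraph-frontend-0:
+    environment:
+      - SRC_AUTOUPGRADE_IGNORE_DRIFT=true
+```
+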
-To run migrators `upgrade` command add the `--skip-drift-check` flag to migrator's entrycommand as below:
+To run migrator's `upgrade` command, add the `--skip-drift-check` flag to migrator's entry command as below:
```yaml
command: ['upgrade', '-from', '5.5.0', '-to', '5.10.0', '--skip-drift-check=true']
```
@@ -96956,7 +97780,7 @@ data:
password: ""
port: ""
user: ""
- pgsslmode: "require" # optional, enable if using SSL
+ sslmode: "require" # optional, enable if using SSL
---
apiVersion: v1
kind: Secret
@@ -96969,7 +97793,7 @@ data:
password: ""
port: ""
user: ""
- pgsslmode: "require" # optional, enable if using SSL
+ sslmode: "require" # optional, enable if using SSL
---
apiVersion: v1
kind: Secret
@@ -96982,7 +97806,7 @@ data:
password: ""
port: ""
user: ""
- pgsslmode: "require" # optional, enable if using SSL
+ sslmode: "require" # optional, enable if using SSL
```
The above Secrets should be deployed to the same namespace as the existing Sourcegraph deployment.
@@ -97019,7 +97843,7 @@ pgsql:
user: "new-user"
password: "new-password"
port: "5432"
- pgsslmode: "require" # optional, enable if using SSL
+ sslmode: "require" # optional, enable if using SSL
```
#### Using external Redis instances
@@ -100906,7 +101730,7 @@ To perform a multi-version upgrade via migrators [upgrade](/admin/updates/migrat
> *Note: you may add the `--dry-run` flag to the `command:` to test things out before altering the databases*
3. Run migrator with `docker-compose up migrator`
- - Migrator `depends_on:` will ensure the databases are ready before attempting to run the migrator. Ensuring that database entry point scripts are run before the migrator attempts to connect to the databases. For users upgrading from a version earlier than `5.10.0`, a PostgreSQL version is required and will be performed automatically here. For more details, see [Upgradeing PostgreSQL](/admin/postgresql#upgrading-postgresql).
+ - Migrator's `depends_on:` ensures the databases are ready before migrator runs, so database entry point scripts execute before migrator attempts to connect to the databases. For users upgrading from a version earlier than `5.10.0`, a PostgreSQL version upgrade is required and will be performed automatically here. For more details, see [Upgrading PostgreSQL](/admin/postgresql#upgrading-postgresql).
**Example:**
```sh
@@ -103719,22 +104543,34 @@ The default style depends on the location of the notice.
# Private network configuration
-A **private network** refers to a secure network environment segregated from the public internet, designed to facilitate internal communications and operations within an organization. This network setup restricts external access, enhancing security and control over data flow by limiting exposure to external threats and unauthorized access.
-
-When deploying self-hosted Sourcegraph instances in private networks with specific compliance and policy requirements, additional configuration may be required to ensure all networking features function correctly. The reasons for applying the following configuration options depend on the specific functionality of the Sourcegraph service and the unique network and infrastructure requirements of the organization.
-
-The following is a list of Sourcegraph services and how and when each initiates outbound connections to external services:
-
-- **executor**: Sourcegraph [Executor](../executors) batch change or precise indexing jobs may need to connect to services hosted within an organization's private network
-- **frontend**: The frontend service communicates externally when connecting to external [auth providers](../auth), sending [telemetry data](../pings), testing code host connections, and connecting to [externally hosted](../external_services) Sourcegraph services
+## Overview
+A private network is your organization's secure, internal network space, separated from the public internet.
+Think of it as your company's own protected environment where internal systems can communicate safely,
+keeping your sensitive data and operations shielded from external access.
+
+When deploying self-hosted Sourcegraph instances in private networks with specific compliance and policy requirements,
+additional configuration may be required to ensure all networking features function correctly. The reasons for applying the following configuration options depend on the specific functionality of the Sourcegraph service and the unique network and infrastructure requirements of the organization.
+
+The following is a list of Sourcegraph services that initiate outbound connections to external services. Sourcegraph services not included in this list can be assumed to only connect to services within the Sourcegraph deployment's network segment:
+- **executor**: Sourcegraph [Executor](../executors) batch change or precise indexing jobs may need to connect to
+services hosted within an organization's private network
+- **frontend**: The frontend service communicates externally when:
+ * Connecting to external [auth providers](../auth)
+ * Sending [telemetry data](../pings)
+ * Testing [code host connections](../code_hosts)
+ * Connecting to [externally hosted](../external_services) Sourcegraph services
+ * Connecting to external [LLM providers](../../cody/capabilities/supported-models) with Cody
- **gitserver**: Executes git commands against externally hosted [code hosts](../external_service)
- **migrator**: Connects to Postgres instances (which may be [externally hosted](../external_services/postgres)) to process database migrations
- **repo-updater**: Communicates with [code hosts](../external_service) APIs to coordinate repository synchronization
-- **worker**: Sourcegraph [Worker](../workers) run various background jobs that may require establishing connections to services hosted within an organization's private network
+- **worker**: Sourcegraph [Workers](../workers) run various background jobs that may require establishing connections to
+services hosted within an organization's private network
## HTTP proxy configuration
-All Sourcegraph services respect the conventional `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables for routing Sourcegraph client application HTTP traffic through a proxy server. The steps for configuring proxy environment variables will depend on your Sourcegraph deployment method.
+All Sourcegraph services respect the conventional `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables for
+routing Sourcegraph client application HTTP traffic through a proxy server. The steps for configuring proxy environment
+variables will depend on your Sourcegraph deployment method.
### Kubernetes Helm
@@ -103751,51 +104587,185 @@ executor|frontend|gitserver|migrator|repo-updater|worker:
value: "blobstore,codeinsights-db,codeintel-db,sourcegraph-frontend-internal,sourcegraph-frontend,github-proxy,gitserver,grafana,indexed-search-indexer,indexed-search,jaeger-query,pgsql,precise-code-intel-worker,prometheus,redis-cache,redis-store,repo-updater,searcher,symbols,syntect-server,worker-executors,worker,cloud-sql-proxy,localhost,127.0.0.1,.svc,.svc.cluster.local,kubernetes.default.svc"
```
-Failure to configure `NO_PROXY` correctly can cause the proxy configuration to interfere with local networking between internal Sourcegraph services.
-
-## Using private CA root certificates
-Some organizations maintain a private Certificate Authority (CA) for issuing certificates within their private network. When Sourcegraph connects to TLS encrypted service using a self-signed certificate that it does not trust, you will observe an `x509: certificate signed by unknown authority` error message in logs.
+
+After setting up your proxy in the override file, you may notice some pods, such as frontend, gitserver, and repo-updater, failing health checks. In that case, add `*` to the `NO_PROXY` environment variable. This should look like:
+
-In order for Sourcegraph to respect an organization's self-signed certificates, the private CA root certificate(s) will need to be appended to Sourcegraph's trusted CA root certificate list in `/etc/ssl/certs/ca-certificates.crt`.
+```
+- name: NO_PROXY
+ value: "blobstore,codeinsights-db,codeintel-db,sourcegraph-frontend-internal,sourcegraph-frontend,github-proxy,gitserver,grafana,indexed-search-indexer,indexed-search,jaeger-query,pgsql,precise-code-intel-worker,prometheus,redis-cache,redis-store,repo-updater,searcher,symbols,syntect-server,worker-executors,worker,cloud-sql-proxy,localhost,127.0.0.1,.svc,.svc.cluster.local,kubernetes.default.svc,*"
+```
-### Configuring sourcegraph-frontend to recognize private CA root certificates
-The following details the process for setting up the sourcegraph-frontend to acknowledge and trust a private CA root certificate for Sourcegraph instances deployed using [Helm](../deploy/kubernetes/helm). For any other Sourcegraph service that needs to trust an organization's private CA root certificate (including gitserver, repo-updater, or migrator), similar steps will need to be followed.
+### Docker Compose
-1. Copy out the existing `ca-certificates.crt` file from the sourcegraph-frontend container:
-```sh
-kubectl cp $(kubectl get pod -l app=sourcegraph-frontend -o jsonpath='{.items[0].metadata.name}'):/etc/ssl/certs/ca-certificates.crt sourcegraph-frontend-ca-certificates.crt
+Add the proxy environment variables to your Docker Compose override file.
+```yaml
+services:
+  <service-name>:
+    environment:
+      - HTTP_PROXY=http://proxy.example.com:8080
+      - HTTPS_PROXY=http://proxy.example.com:8080
+      - NO_PROXY=blobstore,caddy,cadvisor,codeintel-db,codeintel-db-exporter,codeinsights-db,codeinsights-db-exporter,sourcegraph-frontend-0,sourcegraph-frontend-internal,gitserver-0,grafana,migrator,node-exporter,otel-collector,pgsql,pgsql-exporter,precise-code-intel-worker,prometheus,redis-cache,redis-store,repo-updater,searcher-0,symbols-0,syntect-server,worker,zoekt-indexserver-0,zoekt-webserver-0,localhost,127.0.0.1
```
-2. Concatenate the private CA root certificate to the `sourcegraph-frontend-ca-certificates.crt` file:
-```sh
-cat sourcegraph-frontend-ca-certificates.crt {private-ca-certificate.crt file} > ca-certificates.crt
+
+Failure to configure `NO_PROXY` correctly can cause the proxy configuration to interfere with
+local networking between internal Sourcegraph services.
+
+## Docker networking configuration
+If there is an IP conflict between the host network and the Docker network, you may need to configure the Docker CIDR
+range in the docker-compose override file.
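+
+For example, a Docker Compose override that pins the network subnet (a sketch; the network name `sourcegraph` and the CIDR below are assumptions, so check your compose file for the actual network name and pick a range that does not conflict with your hosts):
+
+```yaml
+networks:
+  sourcegraph:
+    ipam:
+      config:
+        - subnet: 10.200.0.0/16
+```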
+
+Additional information on docker networking can be found here:
+* [Docker networking overview](https://docs.docker.com/network/)
+* [Networking in Compose](https://docs.docker.com/compose/how-tos/networking/)
+
+## Trusting TLS certificates using internal PKI
+
+If your organization uses an internal Public Key Infrastructure (PKI) to manage TLS certificates, you may need to configure your
+Sourcegraph instance to trust your internal root Certificate Authorities so that your instance can connect to other internal
+services, such as code hosts and authentication providers.
+
+This method offers several advantages:
+- Works consistently across both Cloud and self-hosted deployments
+- Requires minimal configuration changes
+- Can be managed entirely through the web UI
+- Maintains certificates in a centralized location
+- Aligns with enterprise PKI best practices
+
+The configuration process involves identifying the certificate of your organization's root Certificate
+Authority (CA) and adding it to Sourcegraph's site configuration. This approach is particularly efficient because:
+* Root CA certificates typically have long expiration periods (often measured in years)
+* A single root CA certificate usually covers multiple internal services
+* The configuration can be managed without container modifications or filesystem changes
+
+### Obtain the certificate chain
+Use the OpenSSL command to extract the certificate chain from your code host.
+Replace the domain and port with your internal code host's values:
+
+```bash
+openssl s_client -showcerts -connect example.com:8443 \
+-nameopt lname < /dev/null > certs.log 2>&1
```
-3. Create a new Kubernetes ConfigMap containing the concatenated `ca-certificates.crt` file:
-```sh
-kubectl create configmap sourcegraph-frontend-ca-certificates --from-file=ca-certificates.crt
+
+### Identify the root certificate
+In the generated `certs.log` file, locate the root CA certificate:
+
+Certificate chains typically include 3 certificates:
+
+* Root certificate authority (depth=2)
+* Intermediate certificate authority (depth=1)
+* Server (leaf) certificate (depth=0)
+
+The last certificate in the chain will be the root CA certificate and will typically have:
+
+* A long expiration period (years)
+* A descriptive common name (e.g., "Enterprise Root CA 2023")
+
+Example root CA certificate for github.com:
+
+```text
+Connecting to 140.82.114.3
+depth=2 countryName=US, stateOrProvinceName=New Jersey, localityName=Jersey City, organizationName=The USERTRUST Network, commonName=USERTrust ECC Certification Authority
+verify return:1
+depth=1 countryName=GB, stateOrProvinceName=Greater Manchester, localityName=Salford, organizationName=Sectigo Limited, commonName=Sectigo ECC Domain Validation Secure Server CA
+verify return:1
+depth=0 commonName=github.com
+verify return:1
+CONNECTED(00000005)
+---
+...
+ 2 s:countryName=US, stateOrProvinceName=New Jersey, localityName=Jersey City, organizationName=The USERTRUST Network, commonName=USERTrust ECC Certification Authority
+ i:countryName=GB, stateOrProvinceName=Greater Manchester, localityName=Salford, organizationName=Comodo CA Limited, commonName=AAA Certificate Services
+ a:PKEY: id-ecPublicKey, 384 (bit); sigalg: RSA-SHA384
+ v:NotBefore: Mar 12 00:00:00 2019 GMT; NotAfter: Dec 31 23:59:59 2028 GMT
+-----BEGIN CERTIFICATE-----
+MII...c=
+-----END CERTIFICATE-----
```
-4. Mount the `sourcegraph-frontend-ca-certificates` ConfigMap to the sourcegraph-frontend Deployment:
-```yaml
-frontend:
- extraVolumes:
- - name: ca-certificates
- configMap:
- name: sourcegraph-frontend-ca-certificates
- extraVolumeMounts:
- - name: ca-certificates
- mountPath: /etc/ssl/certs/
+
+### Format the certificate
+Once you've identified the root CA certificate:
+
+* Extract the certificate content including the BEGIN and END markers.
+* Format the certificate for the site configuration:
+ * Replace newlines with \n characters
+ * Enclose the entire certificate in double quotes
+ * Add a trailing comma
+
+
+The following command can be used to obtain, extract, and format the root certificate from a 3-certificate chain.
+Be sure to adjust the hostname and port to match your internal code host. If your certificate chain has a different
+depth, adjust the selection index in the awk command (`awk '/BEGIN CERTIFICATE/{i++} i==X'`) accordingly.
+```bash
+openssl s_client -showcerts -connect example.com:8443 \
+-nameopt lname < /dev/null 2>&1 \
+| awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' \
+| awk '/BEGIN CERTIFICATE/{i++} i==2' \
+| awk '{printf "%s\\n", $0}' | sed 's/\\n$//' \
+| awk '{print "\"" $0 "\","}'
```
-Once deployed, you should see the private CA root certificate in the sourcegraph-frontend container's `/etc/ssl/certs/ca-certificates.crt` file.
-```sh
-kubectl exec -it $(kubectl get pod -l app=sourcegraph-frontend -o jsonpath='{.items[0].metadata.name}') -- tail /etc/ssl/certs/ca-certificates.crt
+### Add the certificate to the site configuration
+Add the formatted certificate to your Sourcegraph site configuration.
+
+```json
+{
+ "experimentalFeatures": {
+ "tls.external": {
+ "certificates": [
+ "-----BEGIN CERTIFICATE-----\naZ...==\n-----END CERTIFICATE-----"
+ ]
+ }
+ }
+}
```
-You can verify that the self-signed certificate is trusted using `curl`:
-```sh
-kubectl exec -it $(kubectl get pod -l app=sourcegraph-frontend -o jsonpath='{.items[0].metadata.name}') -- curl -v {https://internal.service.example.com} > /dev/null
+For organizations with multiple root CAs (uncommon), additional certificates can be added to the array:
+```json
+{
+ "experimentalFeatures": {
+ "tls.external": {
+ "certificates": [
+ "-----BEGIN CERTIFICATE-----\naZ...==\n-----END CERTIFICATE-----",
+ "-----BEGIN CERTIFICATE-----\nMI...I7\n-----END CERTIFICATE-----"
+ ]
+ }
+ }
+}
```
-It is recommended to repeat these steps on a regular cadence to ensure that Sourcegraph's CA root certificate list stays up to date.
+### Validation of certificate configuration
+These steps confirm that configuring the root CA certificate through `tls.external` is sufficient for all standard
+Sourcegraph operations that require secure connections to internal services.
+
+ 1. **Code host connectivity**
+ - Verify using the UI "Test Connection" button
+ - Trigger and validate completed sync jobs
+ Executed by: frontend service
+
+ 2. **Repository operations**
+ - Verify individual repository synchronization
+ - Verify cloning operations
+ Executed by: gitserver service
+
+ 3. **Permission synchronization**
+ - Verify user-centric permission sync jobs
+ Executed by: worker service
+
+
+Repository-centric permission sync jobs are expected to behave identically, as they use the same underlying TLS configuration mechanisms.
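+
+To spot-check that a certificate is trusted, you can exercise a TLS connection from inside the frontend pod (assuming a Kubernetes deployment; substitute your internal service's URL for the example host):
+
+```sh
+kubectl exec -it $(kubectl get pod -l app=sourcegraph-frontend -o jsonpath='{.items[0].metadata.name}') -- \
+  curl -v https://internal.service.example.com > /dev/null
+```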
+
+
+### Recommended best practices
+* Only include root CA certificates, not intermediate or server certificates.
+* Avoid `insecureSkipVerify: true`, as it bypasses important security checks; add the required TLS certificates instead.
+* Document certificate sources and expiration dates in your organization's runbooks.
+* Plan for certificate rotation well before root CA expiration.
+* Most enterprises use a single root CA, so adding one certificate often covers all internal services.
+* Keep the certificate list minimal and well-maintained.
+
+
+
@@ -109605,17 +110575,17 @@ Ensure the following values are set for the application configuration in the ide
## 1. Add an unlisted (non-gallery) application to your Microsoft Entra ID organization
1. In Microsoft Entra ID, create an unlisted (non-gallery) application [following the official documentation](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/add-non-gallery-app).
-1. Once the application is created, follow [these instructions to enable SAML SSO](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-single-sign-on-non-gallery-applications). Use these configuration values (replacing "sourcegraph.example.com" with your Sourcegraph instance URL):
- * **Identifier (Entity ID):** `https://sourcegraph.example.com/.auth/saml/metadata`
- * **Reply URL (Assertion Consumer Service URL):** `https://sourcegraph.example.com/.auth/saml/acs`
- * **Sign-on URL, Relay State, and Logout URL** can be left empty.
- * **User Attributes & Claims:** Add the following attributes.
- - `emailaddress`: user.mail (required)
- - `name`: user.userprincipalname (optional)
- - `login`: user.userprincipalname (optional)
- * **Name ID**: `email`
- * You can leave the other configuration values set to their defaults.
-1. Record the value of the "App Federation Metadata Url". You'll need this in the next section.
+2. Once the application is created, follow [these instructions to enable SAML SSO](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-single-sign-on-non-gallery-applications). Use these configuration values (replacing "sourcegraph.example.com" with your Sourcegraph instance URL):
+ * **Identifier (Entity ID):** `https://sourcegraph.example.com/.auth/saml/metadata`
+ * **Reply URL (Assertion Consumer Service URL):** `https://sourcegraph.example.com/.auth/saml/acs`
+ * **Sign-on URL, Relay State, and Logout URL** can be left empty.
+ * **User Attributes & Claims:** Add the following attributes.
+ - `emailaddress`: user.mail (required)
+ - `name`: user.userprincipalname (optional)
+ - `login`: user.userprincipalname (optional)
+ * **Name ID**: `email`
+ * You can leave the other configuration values set to their defaults.
+3. Record the value of the "App Federation Metadata Url". You'll need this in the next section.
## 2. Add the SAML auth provider to Sourcegraph site config
@@ -109635,8 +110605,7 @@ Ensure the following values are set for the application configuration in the ide
}
```
-> NOTE: Optional, but recommended: [add automatic provisioning of users with SCIM](/admin/scim).
-
+Recommended: [add automatic provisioning of users with SCIM](/admin/scim).