diff --git a/public/llms.txt b/public/llms.txt index 4582303b7..17903b752 100644 --- a/public/llms.txt +++ b/public/llms.txt @@ -5,6 +5,465 @@ This page documents all notable changes to Sourcegraph. For more detailed change {/* CHANGELOG_START */} +# 6.3 Patch 0 + +## v6.3.0 + +- [sourcegraph](https://github.com/sourcegraph/sourcegraph/releases/tag/v6.3.0) + +- [docker-compose](https://github.com/sourcegraph/deploy-sourcegraph-docker/releases/tag/v6.3.0) + +- [helm](https://github.com/sourcegraph/deploy-sourcegraph-helm/releases/tag/v6.3.0) + +- [kustomize](https://github.com/sourcegraph/deploy-sourcegraph-k8s/releases/tag/v6.3.0) + +### Features + +#### Agents + +- Add true positive and impact filters `(PR #4911)` +- Improve task page, abstract out changeset header, add rerun buttons `(PR #4834)` +- Implement agentic chat on top of conversation API `(PR #4809)` +- New agent action menu added `(PR #4761)` +- Allow grader to rewrite or relocate diagnostics [AGENT-302] `(PR #4728)` + - The grader service can now fix broken diagnostics that come out of a review, hopefully improving quality. +- Add additional graphs and metrics to agent overview `(PR #4709)` +- Reformat overview stats, add user list and CSS tidy ups `(PR #4668)` +- Add feedback page `(PR #4656)` +- Add filters for reviews and diagnostic pages `(PR #4564)` +- Improved agents page `(PR #4557)` +- Add review statuses everywhere `(PR #4520)` +- Generic query param parser `(PR #4434)` +- Add default rules `(PR #4418)` + - New default for the review setting `rules: ['builtin-rules', 'repo-rules']`. Builtin rules allow you to start using the review agent without creating any `*.rule.md` files. +- Add a dropdown to access sibling reviews from the changeset `(PR #4375)` +- Grade diagnostics before posting [AGENT-196] `(PR #4205)` + - Adds reflection to the review process to filter out bad diagnostics before posting. 
+ +#### Auto-Edit + +- Add auto-edit long suggestion model `(PR #4965)` +- Use user's sourcegraph instance endpoint for authentication `(PR #4759)` +- Create a new fireworks backend for fine-tunes-proxy `(PR #4661)` +- Implement websocket upgrade logic for the fine-tunes proxy `(PR #4582)` +- Create a new fireworks proxy service for Cody auto-edit `(PR #4538)` + +#### Batch Changes + +- Add search for batch changes template library `(PR #4280)` + +#### Code Intelligence + +- Don't upload empty syntactic indexes `(PR #4738)` + +#### Cody + +- Add OpenAI o3 and o4-mini models `(PR #4958)` +- Add OpenAI 4.1 models support `(PR #4926)` +- Add gemini 2.5 preview support `(PR #4802)` +- Add entitlements UI to site admin `(PR #4742)` +- Support tool results for anthropic and gemini `(PR #4493)` +- Update context limits (CODY-5022) `(PR #4321)` + +#### Cody-Gateway + +- Add gemini 2.5 flash preview support `(PR #4970)` +- Block Enterprise usage of Google models `(PR #4534)` + +#### Completions + +- Enable system prompts for all newer Claude models `(PR #4989)` + +#### Gitserver + +- Add heuristic and eager strategies from Gitaly `(PR #4588)` + +#### Graph + +- Syntactic indexing for C# `(PR #4236)` + +#### Msp + +- Allow configurable Cloud SQL version `(PR #4959)` + +#### Multi Tenant + +- Add coupon sale percentage UI `(PR #4699)` + +#### Release + +- Add a `sg release steps` command `(PR #4206)` + - New internal release command `sg release steps` to be used in pipeline generation + +#### Search + +- Diff comparison page UI enhancements `(PR #5097)` + - Users can now filter diffs on the diff comparison page + - Backport a2725da3281de64d2d2e41438222d0f38700e441 from #4398 +- Expose ENVs for search jobs config `(PR #4975)` +- Add tool selection and stats to deep search `(PR #4744)` +- Add support for chaining multiple filePaths in URL `(PR #4333)` + - Add support for chaining multiple filePaths together in URL + +#### Searchplatform + +- Deep Search Client Side
plumbing `(PR #4531)` + +#### Source + +- Gitserver: add env var for forcing all janitor optimizations to use eager strategy for debug purposes `(PR #4949)` +- Support sub-repo perms for all repo types `(PR #4935)` +- Gitserver: add new chaos testing for git maintenance commands `(PR #4929)` +- Add prometheus dashboards and logs for new janitor `(PR #4861)` +- Adapt Gitaly's stats package and its test suite to run in our codebase `(PR #4365)` + +#### Telemetry + +- Add `billingMetadata` to batch change events `(PR #3732)` + +#### Workspaces + +- Show admin analytics menu link `(PR #5027)` + - Backport 0b9ee7390bca046462737172c39d8ad2e13b99fb from #4652 + +#### Others + +- Syntactic indexing support for C++ `(PR #4606)` + - Adds support for syntactic indexing for C++ +- Unsigned commits warning `(PR #4525)` + - It is possible for users to have the `rejectUnverifiedCommits` site configuration enabled and still apply changesets without the configuration necessary for commit signing. 
This change provides a warning banner during the batch set preview stage in such a case. (Screenshots: one with commit signing fully configured, and one showing the warning when no GitHub commit signing is configured.) +- Expose relationships through GraphQL API `(PR #4330)` +- Add metrics for periodic goroutines `(PR #4317)` + +### Fix + +#### Agents + +- Delete agents codebase `(PR #4982)` +- Agent intro banner CSS tweaks `(PR #4912)` +- Fix chunker calculation bug `(PR #4904)` +- Switch to non-thinking agent for diagnostic grader `(PR #4903)` +- Use empty snippets when old revision cannot be found `(PR #4902)` +- Enable new chat pages for old UI `(PR #4896)` +- Update review and task page headers to more clearly show states `(PR #4891)` +- Fix commit SHA badge on review item `(PR #4866)` +- Show correct line range for diagnostics on `diagnostics` page `(PR #4860)` +- Change changeset ordering by default `(PR #4840)` +- Basic tidy-ups for conversations page `(PR #4829)` +- Improve changesets page `(PR #4828)` +- Improve styling of repo page `(PR #4827)` +- Minor CSS updates to `rule` page `(PR #4826)` +- Improve `rules` page UX `(PR #4822)` +- Post-process diagnostics based on rule filters (AGENT-15) `(PR #4808)` +- Tweaks to agents page `(PR #4757)` +- CSS tweaks to the diagnostic page `(PR #4754)` +- Handle empty impact string `(PR #4741)` +- Improve page layout, agent navigation and breadcrumbs `(PR #4727)` +- Minor spacing issues with layout `(PR #4712)` +- Minor visual improvements to reviews page `(PR #4711)` +- Rename fix rate to true positives `(PR #4706)` +- Link to reviews when available `(PR #4698)` +- Only show review tasks in latest runs `(PR #4686)` +- Limit CSS height on authors filter `(PR #4673)` +- Don't fail fast on missing revisions `(PR #4672)` +- Skip diagnostics for hallucinated file paths `(PR #4608)` +- Improved GitHub app creation step `(PR #4586)` +- Improve review cards `(PR #4580)` +- Rename agent "run" to agent "task" `(PR #4567)` +- Updates run/review
card linking `(PR #4560)` +- Don't fail fast on invalid rule tag `(PR #4555)` +- Make report diagnostic tool skippable `(PR #4542)` +- Allow models to have "tools" capability in site-config `(PR #4529)` +- Improve run animation `(PR #4523)` +- Visually improve the review cards list `(PR #4505)` +- Filter out diagnostics outside the diff (AGENT-6) `(PR #4283)` + +#### Auto-Edit + +- Fix invalid JSON string `(PR #4864)` +- Fix incorrect naming of environment variables `(PR #4670)` + +#### Billing + +- Prevent applying coupons that expire before subscription period ends `(PR #4244)` + +#### Ci + +- Fix syntax in node heap size override `(PR #4595)` +- Remove error logging to avoid OOMs in CI for react integration tests `(PR #4424)` + +#### Code Nav + +- Add client side caching for file content `(PR #5073)` + +#### Completions + +- Track context token usage in completions client `(PR #4208)` + +#### Dev + +- Fix running storybook `(PR #5013)` +- Fix 'pnpm build' command in web-sveltekit `(PR #4851)` + +#### Entitlements + +- Fix interval handling, rename to window `(PR #4495)` + +#### Github + +- Synchronize at least 100 GitHub issue/PR comments, not 30 `(PR #4410)` + - When syncing conversations from GitHub, we now fetch 100 comments by default instead of 30 + +#### Modelconfig + +- Fix gemini-1.5-flash cost `(PR #4441)` + +#### Multi Tenant + +- Don't always show the "no seats left" tooltip `(PR #4688)` +- Do not show upsell banner when we have user space `(PR #4413)` +- Do not show error in loading state in coupon field `(PR #4370)` + +#### Release + +- Set development branch name by default `(PR #4357)` + +#### Search + +- Fix global navigation grow/shrink functionality `(PR #5062)` +- Improve global navigation overflowing `(PR #4804)` +- Limit client range highlights `(PR #4354)` + +#### Source + +- Bump retry timeout on gqlutil tests to lessen flakiness `(PR #4952)` +- Fix error that happens when adding an existing GitHub App `(PR #4558)` + 
- Fixed an issue where Sourcegraph displayed an error when an existing GitHub App was added in Site Admin, even though the addition succeeded. +- Bump default user permission back-off time `(PR #4297)` + +#### Style + +- Fix a gofmt issue that made it to main `(PR #4535)` + +#### Others + +- Create wrapper to reduce risk of `NewGaugeFunc` causing deadlocks `(PR #4960)` +- Do not hard-fail when calculating file stats for large files `(PR #4950)` +- Do not use store methods, which can deadlock because they make observations `(PR #4944)` +- [SRCH-1387] workspace refresh caching using incorrect key `(PR #4859)` + - Fixes issue [https://linear.app/sourcegraph/issue/SRCH-1387/chime-workspaces-not-updating](https://linear.app/sourcegraph/issue/SRCH-1387/chime-workspaces-not-updating). The workspace preview components rely on the Apollo client cache to coordinate state. Queries are made in two places: +- /client/web/src/enterprise/batches/batch-spec/edit/workspaces-preview/useWorkspacesPreview.ts +- /client/web/src/enterprise/batches/batch-spec/edit/workspaces-preview/useWorkspaces.ts + The desired effect is that both queries are cached under the same cache key by Apollo, so when a query finds new results it triggers a component refresh with the new data. The problem was a subtle difference in the variables passed to the query: one had `after: null` and one did not provide the `after` field at all, so they were treated as different queries and their cache entries were kept separate. +- Update auto-updating script to use correct scip-typescript Docker tag `(PR #4722)` + - Switch to Debian-based auto-indexing Docker image for scip-typescript, and official Node builds. The new image is compatible with recent Node versions (verified: 23.11.0). 
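The SRCH-1387 fix above hinges on Apollo treating `{ after: null }` and a missing `after` field as different cache keys. A minimal sketch of the idea (hypothetical names, not the actual Sourcegraph code): normalizing the variables object before querying makes both call sites serialize identically and therefore share one cache entry.

```typescript
// Hypothetical illustration of the SRCH-1387 cache-key mismatch: Apollo keys
// cached queries by query document plus serialized variables, so `{ after: null }`
// and an object without `after` land in different cache entries.
type WorkspacesPreviewVariables = {
    batchSpec: string;
    first: number;
    after?: string | null;
};

// Fill in the optional `after` field so every caller serializes identically.
function normalizeVariables(
    vars: WorkspacesPreviewVariables
): Required<WorkspacesPreviewVariables> {
    return { ...vars, after: vars.after ?? null };
}

// One call site passes `after: null` explicitly, the other omits it entirely.
const fromUseWorkspacesPreview = normalizeVariables({ batchSpec: "spec-1", first: 50, after: null });
const fromUseWorkspaces = normalizeVariables({ batchSpec: "spec-1", first: 50 });

// After normalization both produce the same serialized variables.
console.log(JSON.stringify(fromUseWorkspacesPreview) === JSON.stringify(fromUseWorkspaces)); // true
```

With identical serialized variables, a refresh triggered by one hook updates the data the other hook renders.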
+- Add docs link for github apps `(PR #4563)` + +### Chore + +#### Agents + +- Limit tool calls in grader `(PR #4813)` + - Added `ToolCallLimit` to `InternalCompletionParams` for limiting the number of allowed LLM tool calls in a loop. +- Always trace and show traceID in run logs `(PR #4685)` +- Enable string-enums in go-typespec code generation `(PR #4599)` +- Bazel build hygiene `(PR #4556)` +- Minor changes for localhost development `(PR #4554)` +- Add more information to the de-duplicate error `(PR #4385)` + +#### Ci + +- Add test github action for partially published changesets flow `(PR #4807)` +- Refactor web-sveltekit build process `(PR #4382)` +- Bump typescript in bazel to 5.4.2 `(PR #4367)` +- Update prechecks to handle bzl mod tidy and go mod tidy `(PR #4288)` + +#### Database/Entitlements + +- Test entitlements grants deletion `(PR #4838)` + +#### Dev + +- Cleanup client/vscode dependencies `(PR #4781)` +- Cleanup client/ui dependencies `(PR #4769)` +- Cleanup client/testing dependencies `(PR #4768)` +- Cleanup client/template-parser dependencies `(PR #4714)` +- Cleanup client/shared dependencies `(PR #4696)` +- Cleanup client/observability-server dependencies `(PR #4695)` +- Cleanup client/observability-client dependencies `(PR #4694)` +- Cleanup client/jetbrains dependencies `(PR #4693)` +- Cleanup client/http-client dependencies `(PR #4692)` +- Cleanup client/common dependencies `(PR #4666)` +- Cleanup client/codeintellify dependencies `(PR #4665)` +- Cleanup client/client-api dependencies `(PR #4664)` +- Cleanup client/build-config dependencies `(PR #4663)` +- Cleanup client/browser dependencies `(PR #4561)` +- Cleanup client/branded dependencies `(PR #4552)` +- Cleanup unused dependencies `(PR #4516)` + +#### Entitlements + +- Hide site admin navbar item behind feature flag `(PR #5058)` + +#### Gateway + +- Use Authorization: Bearer as header for reranker `(PR #4513)` +- Change URL for reranker model `(PR #4511)` + +#### Local + +- Use docsite
1.9.6 `(PR #3823)` + +#### Release + +- Improve upgrade error message for out of policy upgrade attempts `(PR #4943)` + - Improve migrator error for invalid upgrade ranges + +#### Search + +- Batch Changes PAT dialog - Fix wording `(PR #4660)` +- Migrate filter UI to svelte 5 `(PR #4479)` + +#### Searchplatform + +- Add deep search route to backend `(PR #4581)` + +#### Security + +- Update to s3proxy 2.6.0 `(PR #4515)` + +#### Telemetrygateway + +- Remove old proto symlink `(PR #4584)` + +#### Workspaces + +- Inject GoogleTagManagerContainer when serving index.html `(PR #4956)` + +#### Others + +- Update third-party licenses `(PR #4967)` +- Remove unused pipeline job implementation `(PR #4951)` +- Remove unused dbworker option `(PR #4920)` +- Update third-party licenses `(PR #4919)` +- Fix batch change codehost links `(PR #4707)` + - Fix some broken links to documentation +- Update third-party licenses `(PR #4671)` +- Remove non-nil pre-condition for pagination function `(PR #4473)` +- Document nil propagation for PaginationArgs `(PR #4468)` +- Summarize linked Slack thread in code `(PR #4460)` +- Update third-party licenses `(PR #4448)` +- Release anish from changelog duties `(PR #4435)` +- Change default to match documented default `(PR #4429)` +- Remove search from gitserver `(PR #4420)` +- Bump up page size for repo cleanup scheduling `(PR #4419)` +- Mark TotalCount method as unreachable `(PR #4254)` +- Remove large.String -> replace back with []byte `(PR #4146)` + +### Refactor + +#### Others + +- Remove deprecated batchChangePreview component `(PR #4384)` + - Refactor to remove old duplicated component + +### Reverts + +- Revert pnpm upgrade (back to v9) `(PR #-1)` +- Revert "Revert "fix(cody-gateway): migrate Google client from from Gemini to Vertex API"" `(PR #4705)` + +### Uncategorized + +#### Others + +- [Backport 6.3.x] Add ability for admins to set the default context for all users within an instance `(PR #5052)` +- Authz: Don't error when external 
account isn't usable for sync `(PR #5011)` +- Perforce: Remove top-level maxChanges setting `(PR #5010)` +- Gitserver: Observe vcs syncer from other shard `(PR #5009)` +- Gitserver: Rename variable and remove experimental disclaimer for v2 janitor `(PR #5008)` +- Add KUBERNETES_IMAGE_PULL_POLICY environment variable for the Executor service to allow setting the image pull policy `(PR #4995)` + - Accepted values for KUBERNETES_IMAGE_PULL_POLICY: + - Always - the kubelet always attempts to pull the latest image. The container will fail if the pull fails. + - Never - the kubelet never pulls the image and only uses a local image. The container will fail if the image isn't present. + - IfNotPresent - the kubelet pulls only if the image isn't present on disk. The container will fail if the image isn't present and the pull fails. +- Bump zoekt `(PR #4991)` +- Gitserver: Add option to disable janitor `(PR #4980)` +- Perforce: Don't drop rules for proxy catchalls `(PR #4978)` +- Perforce: Make changelist mapper recover automatically `(PR #4954)` + - Fixed an issue where Perforce changelist mapping could fall into an unrecoverable state after recloning. 
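The three accepted values above are the standard Kubernetes pull policies. A small sketch of validating the environment variable (a hypothetical helper, not the Executor's actual implementation; the fallback default is an assumption):

```typescript
// Hypothetical validation sketch for KUBERNETES_IMAGE_PULL_POLICY; not
// Sourcegraph's actual Executor code. Only the three Kubernetes pull
// policies listed above are accepted.
const VALID_PULL_POLICIES = ["Always", "Never", "IfNotPresent"] as const;
type PullPolicy = (typeof VALID_PULL_POLICIES)[number];

function parsePullPolicy(raw: string | undefined): PullPolicy {
    // Assumed fallback when the variable is unset: Kubernetes' usual
    // default for tagged images, IfNotPresent.
    if (raw === undefined || raw === "") {
        return "IfNotPresent";
    }
    // Values are case-sensitive: "always" is rejected, "Always" is accepted.
    if ((VALID_PULL_POLICIES as readonly string[]).includes(raw)) {
        return raw as PullPolicy;
    }
    throw new Error(`invalid KUBERNETES_IMAGE_PULL_POLICY: ${raw}`);
}

console.log(parsePullPolicy(process.env.KUBERNETES_IMAGE_PULL_POLICY));
```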
+- Batches: Don't use stateful periodic goroutine `(PR #4946)` +- Refactor cross domain login service to allow re-use in other services `(PR #4942)` + - Moved the userauth and xdomain login service from workspaces to managed services to allow imports from other services + - Refactored the userauth and xdomain login service to not be workspaces-specific, removing all workspaces references and allowing callers to specify the path for cross-domain login. Tested changes locally from the workspaces service. +- Workerutil: Streamline heartbeat and stalled intervals `(PR #4934)` +- Workerutil: Streamline maximum resets `(PR #4932)` +- Fix(agents): use .at(0) instead of [0] for safe array access `(PR #4918)` +- Mail-gatekeeper: increment abuse score for Google abusers `(PR #4916)` +- Janitor: Don't schedule jobs too far into the future `(PR #4906)` +- Workerutil: Unify settings for resetter interval `(PR #4870)` +- Workerutil: Simplify resetter metrics `(PR #4865)` +- Contributors: Fix updating last_processed `(PR #4841)` +- Gitserver: Fix error parsing empty author/committer times `(PR #4832)` +- Contributors: Add missing resetter job `(PR #4831)` +- Tenant: Add deletion routine for searcher cache `(PR #4821)` +- Replace python with explicit python3 for pre-commit hook `(PR #4803)` +- Feat/Deep search: Show reasoning steps during execution `(PR #4795)` +- Database: Add missing indexes for repo hard deletions `(PR #4789)` +- Bug(agents): fixed spend page formatting `(PR #4763)` +- Repoupdater: Initialize subrepoperms `(PR #4756)` +- Bug(agents): fixed repo id bug `(PR #4755)` +- Analytics(telemetry): trim whitespace in comma separated list `(PR #4750)` +- Increase the max tokens to sample `(PR #4749)` +- Adds github only badge to plans page (and style tweaks) `(PR #4747)` +- Change the smart apply deployment to arizona region `(PR #4702)` +- Dotcom: Some cleanups after migration `(PR #4683)` +- Gitserver: Fix panic in ListRepositories error handling `(PR #4682)` +- Telemetry: Add
DB index to optimize sorting in memory `(PR #4667)` +- Gitserver: Update error filter for backend metrics `(PR #4629)` +- Mail-gatekeeper: gRPC client and DNS improvements `(PR #4628)` +- Gitserver: Always disable gc.auto and maintenance.auto `(PR #4613)` +- Perforce: Fixup changelist ID parsing `(PR #4597)` +- Gitserver: Add gRPC method and basic UI for repo stats `(PR #4544)` +- Gitserver: Add OptimizationStrategy interface and runner for it `(PR #4540)` +- Gitserver: Add CleanStaleData function for new janitor `(PR #4532)` +- Gitserver: Address a few compiler warnings `(PR #4458)` +- Gitserver: Improve configuration for fetch `(PR #4439)` +- Gitserver: Implement maintenance methods in Git backend `(PR #4438)` +- Gitserver: Fix bad observable owner `(PR #4437)` +- Gitserver: Cleanup leftovers of coursier `(PR #4436)` +- Add no results page tutorial info `(PR #4432)` +- Add telemetry `(PR #4274)` +- Update subscription page `(PR #4241)` +- Database: Add primary keys to all tables `(PR #4144)` +- Gitserver: Implement scheduler for janitorial tasks `(PR #2519)` + +### Untracked + +The following PRs were merged onto the previous release branch but could not be automatically mapped to a corresponding commit in this release: + +- [backport 6.2] perforce: Don't drop rules for proxy catchalls (#4978) `(PR #4993)` +- [backport 6.2.x] repoupdater: Initialize subrepoperms (#4756) `(PR #4760)` +- [6.2.x] Auto-update all packages in Sourcegraph container images `(PR #4723)` +- [6.2.x] Auto-update all packages in Sourcegraph container images `(PR #4601)` +- Revert "Revert "[Backport 6.2.x] fix(agents): filter out empty diagnostic paths"" `(PR #4427)` +- Revert "Revert "[Backport 6.2.x] fix(agents): make reviews and
diagnostics pages order by -created_at" `(PR #4403)` +Revert "[Backport 6.2.x] fix(agents): filter out empty diagnostic paths" `(PR #4409)` +- Update context limits (CODY-5022) (#4321) `(PR #4426)` +- Add release as branch code owners `(PR #4421)` + - N/A - Not customer facing + +{/* RSS={"version":"v6.3.0", "releasedAt": "2025-04-30"} */} + + # 6.2 Patch 3 ## v6.2.3841 @@ -9052,6 +9511,7 @@ Currently supported versions of Sourcegraph: | **Release** | **General Availability Date** | **Supported** | **Release Notes** | **Install** | |--------------|-------------------------------|---------------|--------------------------------------------------------------------|------------------------------------------------------| +| 6.3 Patch 0 | April 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v630) | [Install](https://sourcegraph.com/docs/admin/deploy) | | 6.2 Patch 3 | April 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v623841) | [Install](https://sourcegraph.com/docs/admin/deploy) | | 6.2 Patch 2 | April 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v622553) | [Install](https://sourcegraph.com/docs/admin/deploy) | | 6.2 Patch 1 | April 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v621106) | [Install](https://sourcegraph.com/docs/admin/deploy) | @@ -9130,6 +9590,7 @@ These versions fall outside the release lifecycle and are not supported anymore: + - [6.2](https://6.2.sourcegraph.com) - [6.1](https://6.1.sourcegraph.com) - [6.0](https://6.0.sourcegraph.com) @@ -9621,9 +10082,9 @@ Sourcegraph requires cloud-based services to power its AI features. For customer Pro and Enterprise Starter plans are billed monthly and can be paid with a credit card. -## How are active users counted and billed? +## How are active users counted and billed for Cody? -This only applies to Enterprise contracts. Pro and Enterprise Starter users pay for a seat every month, regardless of usage. 
+This only applies to Cody Enterprise contracts. Cody Pro and Enterprise Starter users pay for a seat every month, regardless of usage. A billable user is one who is signed in to their Enterprise account and actively interacts with the product (e.g., they see suggested autocompletions, run commands, chat with Cody, start new discussions, clear chat history, copy text from chats, change settings, and more). Simply having Cody installed is not enough to be considered a billable user. @@ -9676,6 +10137,10 @@ On the **Workspace settings > General settings** page, you can delete your works Your subscription renewals are scheduled to happen on the same day of the month. On shorter months (e.g., day 31 in April, which only has 30 days), the renewal happens on the last day of the month instead. +## How do I access my invoices? + +You can access your invoices via the **Workspace settings > Billing** page by clicking the **View invoices** button, which takes you to the Stripe Customer Portal. Note that invoices are not emailed every month. + ## How do I pay my invoice if my subscription is past due? After updating or resolving your payment method issue that occurred during the automatic subscription renewal, you may do one of the following to pay the invoice for your past-due subscription: @@ -9793,7 +10258,7 @@ Workspaces on the Enterprise Starter plan are billed monthly based on the number If you fail to make the payment after the grace period, your workspace will be deleted, and you will not be able to recover your data. -Please also see [Billing FAQs](billing-faqs.mdx) for more FAQs, including how to downgrade Enterprise Starter. +Please also see [FAQs](faqs.mdx) for more FAQs, including how to downgrade Enterprise Starter. ## Features supported @@ -12590,7 +13055,7 @@ We don't offer refunds, but if you have any queries regarding the Cody Pro plan, ### How do I access previous invoices? 
-You can access your invoices via the [Cody Dashboard](https://sourcegraph.com/cody/manage) and clicking "Manage Subscription". +You can access your invoices via the [Cody Dashboard](https://sourcegraph.com/cody/manage) by clicking "Manage Subscription". Note that invoices are not emailed every month. ## Enterprise Starter @@ -12681,14 +13146,14 @@ If you're experiencing issues with Cody not responding in chat, follow these ste If you're experiencing issues with Cody's responses or completions being too slow: - Ensure you have the latest version of the [Cody VS Code extension](https://marketplace.visualstudio.com/items?itemName=sourcegraph.cody-ai). Use the VS Code command `Extensions: Check for Extension Updates` to verify -- Enable verbose logging, restart the extension and reproduce the issue seen again (see below `Access Cody logs` for how to do this +- Enable verbose logging, restart the extension, and reproduce the issue again (see `Access Cody logs` below for how to do this) - Send information to our Support Team at support@sourcegraph.com Some additional information that will be valuable: + - Where are you located? Any proxies or firewalls in use? - Does this happen with multiple providers/models? Which models have you used? - ### Access Cody logs VS Code logs can be accessed via the **Outputs** view. You will need to set Cody to verbose mode to ensure the logs contain the information needed for debugging. To do so: @@ -12900,6 +13365,52 @@ This issue occurs because JCEF isn't supported in Android Studio and causes Cody 1. Select the last option. 1. Restart Android Studio. +## Visual Studio extension + +### Access Cody logs + +Visual Studio logs can be accessed via the **Output** panel. 
To access logs: + +- Open `View -> Output` from the main menu +- Select `Cody` output pane + +![Cody Default Output](https://storage.googleapis.com/sourcegraph-assets/Docs/cody-ouput.png) + +### Autocomplete + +Cody autocomplete for Visual Studio uses the underlying VS API to display completions in the editor. It's turned on by default in VS (`Tools -> Options -> IntelliCode -> Show Inline Completions`). Without this setting enabled, autocomplete will not work, so this is the first thing to check. + +![Inline completions](https://storage.googleapis.com/sourcegraph-assets/Docs/Inline-completions.png) + +Also make sure that `Tools -> Options -> Cody -> Automatically trigger completions` is turned on (it is by default). + +![Autocomplete](https://storage.googleapis.com/sourcegraph-assets/Docs/Autocomplete.png) + +Autocomplete is supported from Visual Studio 17.6+ and includes support for the following languages: + +1. C/C++/C# +2. Python +3. JavaScript/TypeScript/TSX +4. HTML +5. CSS +6. JSON + +#### Non-trusted certificates + +If autocomplete still doesn't work (or the Cody Chat), you could try **turning on** the option to `accept non-trusted certificates` (requires Visual Studio restart). This should help, especially in enterprise settings if you are behind a firewall. + +![Non-trusted-certificates](https://storage.googleapis.com/sourcegraph-assets/Docs/Non-trusted-certificates.png) + +### Detailed debugging logs + +The detailed logging configuration can be turned on by adding the `CODY_VS_DEV_CONFIG` environment variable containing the full path to [the configuration file](https://github.com/sourcegraph/cody-vs/blob/main/src/CodyDevConfig.json) placed somewhere in the filesystem. + +![Detailed logs](https://storage.googleapis.com/sourcegraph-assets/Docs/Detailed-logs.png) + +Two additional output panes, `Cody Agent` and `Cody Notifications`, will be created with more detailed logs. 
More information on how to configure them is available [here](https://github.com/sourcegraph/cody-vs/blob/main/CONTRIBUTING.md#developer-configuration-file). + +![Cody output panes](https://storage.googleapis.com/sourcegraph-assets/Docs/Cody-output-panes.png) + ## Regular Expressions ### Asterisks being removed @@ -14147,7 +14658,7 @@ This field is an array of items, each with the following fields: - `${apiVersionId}` specifies the API version, which helps detect compatibility issues between models and Sourcegraph instances. For example, `"2023-06-01"` can indicate that the model uses that version of the Anthropic API. If unsure, you may set this to `"unknown"` when defining custom models - `displayName`: An optional, user-friendly name for the model. If not set, clients should display the `ModelID` part of the `modelRef` instead (not the `modelName`) - `modelName`: A unique identifier the API provider uses to specify which model is being invoked. This is the identifier that the LLM provider recognizes to determine the model you are calling -- `capabilities`: A list of capabilities that the model supports. Supported values: **autocomplete** and **chat** +- `capabilities`: A list of capabilities that the model supports. Supported values: `autocomplete`, `chat`, `vision`, `reasoning`, `edit`, `tools`. - `category`: Specifies the model's category with the following options: - `"balanced"`: Typically the best default choice for most users. This category is suited for models like Sonnet 3.5 (as of October 2024) - `"speed"`: Ideal for low-parameter models that may not suit general-purpose chat but are beneficial for specialized tasks, such as query rewriting @@ -14157,6 +14668,9 @@ This field is an array of items, each with the following fields: - `contextWindow`: An object that defines the **number of tokens** (units of text) that can be sent to the LLM. 
This setting influences response time and request cost and may vary according to the limits set by each LLM model or provider. It includes two fields: - `maxInputTokens`: Specifies the maximum number of tokens for the contextual data in the prompt (e.g., question, relevant snippets) - `maxOutputTokens`: Specifies the maximum number of tokens allowed in the response +- `reasoningEffort`: Specifies the reasoning effort for reasoning models (those with the `reasoning` capability). Supported values: `high`, `medium`, `low`. How this value is treated depends on the specific provider. +For example, for Anthropic models supporting thinking, `low` effort means that the minimum [`thinking.budget_tokens`](https://docs.anthropic.com/en/api/messages#body-thinking) value (1024) will be used. For other `reasoningEffort` values, the `contextWindow.maxOutputTokens / 2` value will be used. +For OpenAI reasoning models, the `reasoningEffort` field value corresponds to the [`reasoning_effort`](https://platform.openai.com/docs/api-reference/chat/create#chat-create-reasoning_effort) request body value. - `serverSideConfig`: Additional configuration for the model. It can be one of the following: - `awsBedrockProvisionedThroughput`: Specifies provisioned throughput settings for AWS Bedrock models with the following fields: @@ -14258,7 +14772,7 @@ In this modelOverrides config example: - The model is configured to use the `"chat"` and `"reasoning"` capabilities - The `reasoningEffort` can be set to 3 different options in the Model Config. These options are `high`, `medium` and `low` - The default `reasoningEffort` is set to `low` -- When the reasoning effort is `low`, 1024 tokens is used as the thinking budget. With `medium` and `high` the thinking budget is set via `max_tokens_to_sample/2` +- For Anthropic models supporting thinking, when the reasoning effort is `low`, 1024 tokens are used as the thinking budget. 
With `medium` and `high`, the thinking budget is set to half of the `maxOutputTokens` value. Refer to the [examples page](/cody/enterprise/model-config-examples) for additional examples. @@ -14537,22 +15051,47 @@ Below are configuration examples for setting up various LLM providers using BYOK ], "modelOverrides": [ { - "modelRef": "anthropic::2024-10-22::claude-3.5-sonnet", - "displayName": "Claude 3.5 Sonnet", - "modelName": "claude-3-5-sonnet-latest", + "modelRef": "anthropic::2024-10-22::claude-3-7-sonnet-latest", + "displayName": "Claude 3.7 Sonnet", + "modelName": "claude-3-7-sonnet-latest", "capabilities": ["chat"], "category": "accuracy", "status": "stable", "contextWindow": { - "maxInputTokens": 45000, - "maxOutputTokens": 4000 - } + "maxInputTokens": 132000, + "maxOutputTokens": 8192 + } }, + { + "modelRef": "anthropic::2024-10-22::claude-3-7-sonnet-extended-thinking", + "displayName": "Claude 3.7 Sonnet Extended Thinking", + "modelName": "claude-3-7-sonnet-latest", + "capabilities": ["chat", "reasoning"], + "category": "accuracy", + "status": "stable", + "contextWindow": { + "maxInputTokens": 93000, + "maxOutputTokens": 64000 + }, + "reasoningEffort": "low" + }, + { + "modelRef": "anthropic::2024-10-22::claude-3-5-haiku-latest", + "displayName": "Claude 3.5 Haiku", + "modelName": "claude-3-5-haiku-latest", + "capabilities": ["autocomplete", "edit", "chat"], + "category": "speed", + "status": "stable", + "contextWindow": { + "maxInputTokens": 132000, + "maxOutputTokens": 8192 + } + } ], "defaultModels": { - "chat": "anthropic::2024-10-22::claude-3.5-sonnet", - "fastChat": "anthropic::2023-06-01::claude-3-haiku", - "codeCompletion": "fireworks::v1::deepseek-coder-v2-lite-base" + "chat": "anthropic::2024-10-22::claude-3-7-sonnet-latest", + "fastChat": "anthropic::2024-10-22::claude-3-5-haiku-latest", + "codeCompletion": "anthropic::2024-10-22::claude-3-5-haiku-latest" } } ``` @@ -14561,8 +15100,9 @@ In the configuration above, - Set up a provider override for
Anthropic, routing requests for this provider directly to the specified Anthropic endpoint (bypassing Cody Gateway) - Add three Anthropic models: - - Two models with chat capabilities (`"anthropic::2024-10-22::claude-3.5-sonnet"` and `"anthropic::2023-06-01::claude-3-haiku"`), providing options for chat users - - One model with autocomplete capability (`"fireworks::v1::deepseek-coder-v2-lite-base"`) + - `"anthropic::2024-10-22::claude-3-7-sonnet-latest"` with chat capability + - `"anthropic::2024-10-22::claude-3-7-sonnet-extended-thinking"` with chat and reasoning capabilities (note: to enable [Claude's extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking), the model override must include the "reasoning" capability and define "reasoningEffort") + - `"anthropic::2024-10-22::claude-3-5-haiku-latest"` with autocomplete, edit, and chat capabilities - Set the configured models as default models for Cody features in the `"defaultModels"` field @@ -14643,45 +15183,61 @@ In the configuration above, } ], "modelOverrides": [ - { - "modelRef": "openai::2024-02-01::gpt-4o", - "displayName": "GPT-4o", - "modelName": "gpt-4o", - "capabilities": ["chat"], - "category": "accuracy", - "status": "stable", - "contextWindow": { + { + "modelRef": "openai::unknown::gpt-4o", + "displayName": "GPT-4o", + "modelName": "gpt-4o", + "capabilities": ["chat"], + "category": "accuracy", + "status": "stable", + "contextWindow": { "maxInputTokens": 45000, "maxOutputTokens": 4000 + } + }, + { + "modelRef": "openai::unknown::gpt-4.1-nano", + "displayName": "GPT-4.1-nano", + "modelName": "gpt-4.1-nano", + "capabilities": ["edit", "chat", "autocomplete"], + "category": "speed", + "status": "stable", + "tier": "free", + "contextWindow": { + "maxInputTokens": 77000, + "maxOutputTokens": 16000 + } + }, + { + "modelRef": "openai::unknown::o3", + "displayName": "o3", + "modelName": "o3", + "capabilities": ["chat", "reasoning"], + "category": 
"accuracy", + "status": "stable", + "tier": "pro", + "contextWindow": { + "maxInputTokens": 68000, + "maxOutputTokens": 100000 + }, + "reasoningEffort": "medium" } - }, - { - "modelRef": "openai::unknown::gpt-3.5-turbo-instruct", - "displayName": "GPT-3.5 Turbo Instruct", - "modelName": "gpt-3.5-turbo-instruct", - "capabilities": ["autocomplete"], - "category": "speed", - "status": "stable", - "contextWindow": { - "maxInputTokens": 7000, - "maxOutputTokens": 4000 - } + ], + "defaultModels": { + "chat": "openai::unknown::gpt-4o", + "fastChat": "openai::unknown::gpt-4.1-nano", + "codeCompletion": "openai::unknown::gpt-4.1-nano" } -], - "defaultModels": { - "chat": "openai::2024-02-01::gpt-4o", - "fastChat": "openai::2024-02-01::gpt-4o", - "codeCompletion": "openai::unknown::gpt-3.5-turbo-instruct" - } } ``` In the configuration above, - Set up a provider override for OpenAI, routing requests for this provider directly to the specified OpenAI endpoint (bypassing Cody Gateway) -- Add two OpenAI models: - - `"openai::2024-02-01::gpt-4o"` with "chat" capabilities - used for "chat" and "fastChat" - - `"openai::unknown::gpt-3.5-turbo-instruct"` with "autocomplete" capability - used for "autocomplete" +- Add three OpenAI models: + - `"openai::unknown::gpt-4o"` with chat capability - used as a default model for chat + - `"openai::unknown::gpt-4.1-nano"` with chat, edit, and autocomplete capabilities - used as a default model for fast chat and autocomplete + - `"openai::unknown::o3"` with chat and reasoning capabilities - an o-series model that supports thinking and can be used for chat (note: to enable thinking, the model override must include the "reasoning" capability and define "reasoningEffort"). 
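The model references in these examples all follow the `providerID::apiVersionID::modelID` pattern, with `unknown` where no provider API version is pinned. As a quick illustration — a hypothetical helper sketch, not part of Sourcegraph or the `src` CLI — a modelRef can be split into its parts like this:

```python
# Split a Cody modelRef of the form "providerID::apiVersionID::modelID".
# Hypothetical helper for illustration; not part of any Sourcegraph API.
def parse_model_ref(model_ref: str) -> dict:
    parts = model_ref.split("::", 2)
    if len(parts) != 3:
        raise ValueError(f"expected providerID::apiVersionID::modelID, got {model_ref!r}")
    provider_id, api_version_id, model_id = parts
    return {"providerID": provider_id, "apiVersionID": api_version_id, "modelID": model_id}

# "openai::unknown::o3" -> provider "openai", API version "unknown", model "o3"
print(parse_model_ref("openai::unknown::o3"))
```

The `modelName` field, by contrast, is the provider's own identifier and need not match the `modelID` segment.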
@@ -14717,6 +15273,33 @@ In the configuration above, "maxOutputTokens": 4000 } }, + { + "modelRef": "azure-openai::unknown::gpt-4.1-nano", + "displayName": "GPT-4.1-nano", + "modelName": "gpt-4.1-nano", + "capabilities": ["edit", "chat", "autocomplete"], + "category": "speed", + "status": "stable", + "tier": "free", + "contextWindow": { + "maxInputTokens": 77000, + "maxOutputTokens": 16000 + } + }, + { + "modelRef": "azure-openai::unknown::o3-mini", + "displayName": "o3-mini", + "modelName": "o3-mini", + "capabilities": ["chat", "reasoning"], + "category": "accuracy", + "status": "stable", + "tier": "pro", + "contextWindow": { + "maxInputTokens": 68000, + "maxOutputTokens": 100000 + }, + "reasoningEffort": "medium" + }, { "modelRef": "azure-openai::unknown::gpt-35-turbo-instruct-test", "displayName": "GPT-3.5 Turbo Instruct", @@ -14732,8 +15315,8 @@ In the configuration above, ], "defaultModels": { "chat": "azure-openai::unknown::gpt-4o", - "fastChat": "azure-openai::unknown::gpt-4o", - "codeCompletion": "azure-openai::unknown::gpt-35-turbo-instruct-test" + "fastChat": "azure-openai::unknown::gpt-4.1-nano", + "codeCompletion": "azure-openai::unknown::gpt-4.1-nano" } } ``` @@ -14742,9 +15325,11 @@ In the configuration above, - Set up a provider override for Azure OpenAI, routing requests for this provider directly to the specified Azure OpenAI endpoint (bypassing Cody Gateway). **Note:** For Azure OpenAI, ensure that the `modelName` matches the name defined in your Azure portal configuration for the model. 
-- Add two OpenAI models: - - `"azure-openai::unknown::gpt-4o"` with "chat" capability - used for "chat" and "fastChat" - - `"azure-openai::unknown::gpt-35-turbo-instruct-test"` with "autocomplete" capability - used for "autocomplete" +- Add four OpenAI models: + - `"azure-openai::unknown::gpt-4o"` with chat capability - used as a default model for chat + - `"azure-openai::unknown::gpt-4.1-nano"` with chat, edit and autocomplete capabilities - used as a default model for fast chat and autocomplete + - `"azure-openai::unknown::o3-mini"` with chat and reasoning capabilities - o-series model that supports thinking, can be used for chat (note: to enable thinking, model override should include "reasoning" capability and have "reasoningEffort" defined) + - `"azure-openai::unknown::gpt-35-turbo-instruct-test"` with "autocomplete" capability - included as an alternative model - Since `"azure-openai::unknown::gpt-35-turbo-instruct-test"` is not supported on the newer OpenAI `"v1/chat/completions"` endpoint, we set `"useDeprecatedCompletionsAPI"` to `true` to route requests to the legacy `"v1/completions"` endpoint. This setting is unnecessary if you are using a model supported on the `"v1/chat/completions"` endpoint. 
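For reference, the `"useDeprecatedCompletionsAPI"` flag sits in the provider override's `serverSideConfig`. A minimal sketch — the endpoint and token values are placeholders, and the exact field set is assumed from the Azure OpenAI provider configuration this example is based on:

```json
"providerOverrides": [
  {
    "id": "azure-openai",
    "displayName": "Azure OpenAI",
    "serverSideConfig": {
      "type": "azureOpenAI",
      "accessToken": "token",
      "endpoint": "https://acme-test.openai.azure.com/",
      "useDeprecatedCompletionsAPI": true
    }
  }
]
```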
@@ -14903,48 +15488,63 @@ In the configuration above, ], "modelOverrides": [ { - "modelRef": "google::unknown::claude-3-5-sonnet", - "displayName": "Claude 3.5 Sonnet (via Google/Vertex)", - "modelName": "claude-3-5-sonnet@20240620", - "contextWindow": { - "maxInputTokens": 45000, - "maxOutputTokens": 4000 - }, - "capabilities": ["chat"], - "category": "accuracy", - "status": "stable" + "modelRef": "google::20250219::claude-3-7-sonnet", + "displayName": "Claude 3.7 Sonnet", + "modelName": "claude-3-7-sonnet@20250219", + "capabilities": ["chat", "vision", "tools"], + "category": "accuracy", + "status": "stable", + "contextWindow": { + "maxInputTokens": 132000, + "maxOutputTokens": 8192 + } }, { - "modelRef": "google::unknown::claude-3-haiku", - "displayName": "Claude 3 Haiku", - "modelName": "claude-3-haiku@20240307", - "capabilities": ["autocomplete", "chat"], - "category": "speed", - "status": "stable", - "contextWindow": { - "maxInputTokens": 7000, - "maxOutputTokens": 4000 - } + "modelRef": "google::20250219::claude-3-7-sonnet-extended-thinking", + "displayName": "Claude 3.7 Sonnet Extended Thinking", + "modelName": "claude-3-7-sonnet@20250219", + "capabilities": ["chat", "reasoning"], + "category": "accuracy", + "status": "stable", + "reasoningEffort": "medium", + "contextWindow": { + "maxInputTokens": 93000, + "maxOutputTokens": 64000 + } }, - ], - "defaultModels": { - "chat": "google::unknown::claude-3-5-sonnet", - "fastChat": "google::unknown::claude-3-5-sonnet", - "codeCompletion": "google::unknown::claude-3-haiku" - } + { + "modelRef": "google::20250219::claude-3-5-haiku", + "displayName": "Claude 3.5 Haiku", + "modelName": "claude-3-5-haiku@20241022", + "capabilities": ["autocomplete", "edit", "chat", "tools"], + "category": "speed", + "status": "stable", + "contextWindow": { + "maxInputTokens": 132000, + "maxOutputTokens": 8192 + } + } + ], + "defaultModels": { + "chat": "google::20250219::claude-3-7-sonnet", + "fastChat": 
"google::20250219::claude-3-5-haiku", + "codeCompletion": "google::20250219::claude-3-5-haiku" + } } ``` In the configuration above, - Set up a provider override for Google Anthropic, routing requests for this provider directly to the specified endpoint (bypassing Cody Gateway) -- Add two Anthropic models: - - `"google::unknown::claude-3-5-sonnet"` with "chat" capabiity - used for "chat" and "fastChat" - - `"google::unknown::claude-3-haiku"` with "autocomplete" capability - used for "autocomplete" +- Add three Anthropic models: + - `"google::20250219::claude-3-7-sonnet"` with chat, vision, and tools capabilities + - `"google::20250219::claude-3-7-sonnet-extended-thinking"` with chat and reasoning capabilities (note: to enable [Claude's extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking), the model override must include the "reasoning" capability and define "reasoningEffort") + - `"google::20250219::claude-3-5-haiku"` with autocomplete, edit, chat, and tools capabilities +- Set the configured models as default models for Cody features in the `"defaultModels"` field - + ```json "modelConfiguration": { @@ -14963,7 +15563,7 @@ In the configuration above, "modelOverrides": [ { "modelRef": "google::unknown::claude-3-5-sonnet", - "displayName": "Claude 3.5 Sonnet (via Google/Vertex)", + "displayName": "Claude 3.5 Sonnet (via Google Vertex)", "modelName": "claude-3-5-sonnet@20240620", "contextWindow": { "maxInputTokens": 45000, @@ -15013,7 +15613,7 @@ In the configuration above, "displayName": "Anthropic models through AWS Bedrock", "serverSideConfig": { "type": "awsBedrock", - "accessToken": ":", "endpoint": "", "region": "us-west-2" } @@ -15021,16 +15621,16 @@ In the configuration above, ], "modelOverrides": [ { - "modelRef": "aws-bedrock::2024-02-29::claude-3-sonnet", - "displayName": "Claude 3 Sonnet", - "modelName": "claude-3-sonnet", + "modelRef": "aws-bedrock::2025-02-19::claude-3-7-sonnet", + "displayName": "Claude 3.7 Sonnet", + 
"modelName": "anthropic.claude-3-7-sonnet-20250219-v1:0", "serverSideConfig": { "type": "awsBedrockProvisionedThroughput", "arn": "" // e.g., arn:aws:bedrock:us-west-2:537452198621:provisioned-model/57z3lgkt1cx2 }, "contextWindow": { - "maxInputTokens": 16000, - "maxOutputTokens": 4000 + "maxInputTokens": 132000, + "maxOutputTokens": 8192 }, "capabilities": ["chat", "autocomplete"], "category": "balanced", @@ -15038,9 +15638,9 @@ In the configuration above, }, ], "defaultModels": { - "chat": "aws-bedrock::2024-02-29::claude-3-sonnet", - "codeCompletion": "aws-bedrock::2024-02-29::claude-3-sonnet", - "fastChat": "aws-bedrock::2024-02-29::claude-3-sonnet" + "chat": "aws-bedrock::2025-02-19::claude-3-7-sonnet", + "codeCompletion": "aws-bedrock::2025-02-19::claude-3-7-sonnet", + "fastChat": "aws-bedrock::2025-02-19::claude-3-7-sonnet" }, } ``` @@ -15049,7 +15649,7 @@ In the configuration described above, - Set up a provider override for Amazon Bedrock, routing requests for this provider directly to the specified endpoint, bypassing Cody Gateway - Add the `"aws-bedrock::2024-02-29::claude-3-sonnet"` model, which is used for all Cody features. We do not add other models for simplicity, as adding multiple models is already covered in the examples above -- Note: Since the model in the example uses provisioned throughput, specify the ARN in the `serverSideConfig.arn` field of the model override. +- Since the model in the example uses [Amazon Bedrock provisioned throughput](https://docs.aws.amazon.com/bedrock/latest/userguide/prov-throughput.html), specify the ARN in the `serverSideConfig.arn` field of the model override. Provider override `serverSideConfig` fields: @@ -15058,18 +15658,24 @@ Provider override `serverSideConfig` fields: | `type` | Must be `"awsBedrock"`. | | `accessToken` | Leave empty to rely on instance role bindings or other AWS configurations in the frontend service. Use `:` for direct credential configuration, or `::` if a session token is also required. 
| | `endpoint` | For pay-as-you-go, set it to an AWS region code (e.g., `us-west-2`) when using a public Amazon Bedrock endpoint. For provisioned throughput, set it to the provisioned VPC endpoint for the bedrock-runtime API (e.g., `https://vpce-0a10b2345cd67e89f-abc0defg.bedrock-runtime.us-west-2.vpce.amazonaws.com`). | -| `region` | The region to use when configuring API clients. This is necessary because the 'frontend' binary container cannot access environment variables from the host OS. | +| `region` | The region to use when configuring API clients. The `AWS_REGION` Environment variable must also be configured in the `sourcegraph-frontend` container to match. | Provisioned throughput for Amazon Bedrock models can be configured using the `"awsBedrockProvisionedThroughput"` server-side configuration type. Refer to the [Model Overrides](/cody/enterprise/model-configuration#model-overrides) section for more details. + + If using [IAM roles for EC2 / instance role binding](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html), + you may need to increase the [HttpPutResponseHopLimit +](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_InstanceMetadataOptionsRequest.html#:~:text=HttpPutResponseHopLimit) instance metadata option to a higher value (e.g., 2) to ensure that the metadata service can be accessed from the frontend container running in the EC2 instance. See [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-IMDS-existing-instances.html) for instructions. + + We only recommend configuring AWS Bedrock to use an accessToken for authentication. Specifying no accessToken (e.g. to use [IAM roles for EC2 / instance role binding](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)) - is not currently recommended (there is a known performance bug with this - method which will prevent autocomplete from working correctly. 
(internal - issue: PRIME-662) + is not currently recommended. There is a known performance bug with this + method which will prevent autocomplete from working correctly (internal + issue: CORE-819) @@ -17180,12 +17786,19 @@ Cody supports a variety of cutting-edge large language models for use in chat an | OpenAI | [o3-mini-medium](https://openai.com/index/openai-o3-mini/) (experimental) | ✅ | ✅ | ✅ | | | | | | OpenAI | [o3-mini-high](https://openai.com/index/openai-o3-mini/) (experimental) | - | - | ✅ | | | | | | OpenAI | [o1](https://platform.openai.com/docs/models#o1) | - | ✅ | ✅ | | | | | +| OpenAI | [o3](https://platform.openai.com/docs/models#o3) | - | ✅ | ✅ | | | | | +| OpenAI | [o4-mini](https://platform.openai.com/docs/models/o4-mini) | ✅ | ✅ | ✅ | | | | | +| OpenAI | [GPT-4.1](https://platform.openai.com/docs/models/gpt-4.1) | - | ✅ | ✅ | | | | | +| OpenAI | [GPT-4.1-mini](https://platform.openai.com/docs/models/gpt-4.1-mini) | ✅ | ✅ | ✅ | | | | | +| OpenAI | [GPT-4.1-nano](https://platform.openai.com/docs/models/gpt-4.1-nano) | ✅ | ✅ | ✅ | | | | | | Anthropic | [Claude 3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | | | Anthropic | [Claude 3.5 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | | | Anthropic | [Claude 3.7 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | ✅ | ✅ | | | | | | Google | [Gemini 1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (beta) | | | | | | Google | [Gemini 2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | | -| Google | [Gemini 2.0 Flash-Lite Preview](https://deepmind.google/technologies/gemini/flash/) (experimental) | ✅ | ✅ | ✅ | | | | | +| Google | [Gemini 2.0 Flash-Lite](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | | +| Google | [Gemini 2.5 Pro 
Preview](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-pro) | - | ✅ | ✅ | | | | | +| Google | [Gemini 2.5 Flash Preview](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash) (experimental) | ✅ | ✅ | ✅ | | | | | To use Claude 3 Sonnet models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version. @@ -17205,17 +17818,24 @@ In addition, Sourcegraph Enterprise customers using GCP Vertex (Google Cloud Pla Cody uses a set of models for autocomplete which are suited for the low latency use case. -| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** | | | | | -| :-------------------- | :---------------------------------------------------------------------------------------- | :------- | :------ | :------------- | --- | --- | --- | --- | -| Fireworks.ai | [DeepSeek-Coder-V2](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) | ✅ | ✅ | ✅ | | | | | -| Fireworks.ai | [StarCoder](https://arxiv.org/abs/2305.06161) | - | - | ✅ | | | | | -| Anthropic | [claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | - | ✅ | | | | | -| | | | | | | | | | +| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** | | | | | +| :----------- | :---------------------------------------------------------------------------------------- | :------- | :------ | :------------- | --- | --- | --- | --- | +| Fireworks.ai | [DeepSeek-Coder-V2](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) | ✅ | ✅ | ✅ | | | | | +| Anthropic | [claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | - | ✅ | | | | | +| | | | | | | | | | The default autocomplete model for Cody Free, Pro and Enterprise users is DeepSeek-Coder-V2. The DeepSeek model used by Sourcegraph is hosted by Fireworks.ai, and is hosted as a single-tenant service in a US-based data center. 
For more information see our [Cody FAQ](https://sourcegraph.com/docs/cody/faq#is-any-of-my-data-sent-to-deepseek). +## Smart Apply + +| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** | | | | | | | +| :----------- | :------------- | :------- | :------ | :------------- | --- | --- | --- | --- | --- | --- | +| Fireworks.ai | Qwen 2.5 Coder | ✅ | ✅ | ✅ | | | | | | | + +Enterprise users not using Cody Gateway get a Claude Sonnet-based model for Smart Apply. + @@ -17983,6 +18603,12 @@ Smart Apply also supports the executing of commands in the terminal. When you as ![smart-apply](https://storage.googleapis.com/sourcegraph-assets/Docs/smart-apply-2025.png) +### Model used for Smart Apply + +To ensure low latency, Cody uses a more targeted Qwen 2.5 Coder model for Smart Apply. This model improves the responsiveness of the Smart Apply feature in both VS Code and JetBrains while preserving edit quality. Users on Cody Free, Pro, Enterprise Starter, and Enterprise plans get this default Qwen 2.5 Coder model for Smart Apply suggestions. + +Enterprise users not using Cody Gateway get a Claude Sonnet-based model for Smart Apply. + ## Chat history Cody keeps a history of your chat sessions. You can view it by clicking the **History** button in the chat panel. You can **Export** it to a JSON file for later use or click the **Delete all** button to clear the chat history. @@ -20757,6 +21383,15 @@ Here is an example of search results with personalized search ranking enabled: As you can see, the results are now ranked based on their relevance to the query, and the results from repositories you've recently contributed to are boosted. +## Compare changes across revisions + +When you run a search, you can compare the results from two different revisions of the codebase. From your search query results page, click the three-dot **...** icon next to the **Contributors** tab. Then select the **Compare** option. 
+ +From here, you can filter by file and directory when comparing large diffs, making them easier to navigate and review. + +This file picker is useful when comparing branches with thousands of changed files and allows you to select specific files or directories to focus on. You can [filter files directly](/code-search/compare-file-filtering) by constructing a URL with multiple file paths or use a compressed file list for even larger selections. + +![file-and-directory-filtering](https://storage.googleapis.com/sourcegraph-assets/Docs/filter-by-file-dir-on-compare.png) ## Other search tips @@ -20815,6 +21450,128 @@ The [symbol search](/code-search/types/symbol) performance section describes que + + +# File Filtering in the Repository Comparison Page + +The repository comparison page provides powerful file filtering capabilities that allow you to focus on specific files in a comparison. The system supports multiple ways to specify which files you want to view when comparing branches, tags, or commits. + +## Query parameter-based file filtering + +The comparison page supports several query parameters that specify which files to include in the comparison: + +### 1. Individual file paths + +You can specify individual files using either of these parameters: + +- `filePath=path/to/file.js` - Primary parameter for specifying files +- `f=path/to/file.js` - Shorthand alternative + +Multiple files can be included by repeating the parameter: + +```shell +?filePath=src/index.ts&filePath=src/components/Button.tsx +``` + +### 2. Compressed file lists + +For comparisons with a large number of files, the system supports compressed file lists (newline-separated): + +- `compressedFileList=base64EncodedCompressedData` - Efficiently packs many file paths + +This parameter efficiently transmits large numbers of file paths using base64-encoded, zlib-compressed data. 
The compression allows hundreds or thousands of files to be included in a URL without exceeding length limits, which vary depending on the browser, HTTP server, and other services involved, like Cloudflare. + +```typescript +// Behind the scenes, the code decompresses file lists using: +const decodedData = atoburl(compressedFileList) +const compressedData = Uint8Array.from([...decodedData].map(char => char.charCodeAt(0))) +const decompressedData = pako.inflate(compressedData, { to: 'string' }) +``` + +One way to create a list of files for the `compressedFileList` parameter is to use Python's built-in libraries to compress and encode using url-safe base64 encoding (smaller than base64-encoding, then url-encoding). + +```shell +python3 -c "import sys, zlib, base64; sys.stdout.write(base64.urlsafe_b64encode(zlib.compress(sys.stdin.buffer.read())).decode().rstrip('='))" < list.of.files > list.of.files.deflated.b64url +``` + +### 3. Special focus mode + +You can focus on a single file using: + +- `focus=true&filePath=path/to/specific/file.js` - Show only this file in detail view + +## File filtering UI components + +The comparison view provides several UI components to help you filter and navigate files: + +### FileDiffPicker + +The FileDiffPicker component allows you to: + +- Search for files by name or path +- Filter files by type/extension +- Toggle between showing all files or only modified files +- Sort files by different criteria (path, size of change, etc.) + +This component uses a dedicated file metadata query optimized for quick filtering. Results are displayed as you type. Through client-side filtering, the component can efficiently handle repositories with thousands of files. + +### File navigation + +When viewing diffs, you can: + +- Click on any file in the sidebar to switch to that file +- Use keyboard shortcuts to navigate between files +- Toggle between expanded and collapsed views of files +- Show or hide specific changes (additions, deletions, etc.) 
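The shell one-liner above can also be written as a reusable pair of functions. This is an illustrative sketch (the file paths are made up): it packs newline-separated paths with zlib and URL-safe base64 with padding stripped, mirroring the shell one-liner, and verifies the round trip:

```python
import base64
import zlib

def compress_file_list(paths):
    """Pack newline-separated file paths into a compressedFileList value."""
    raw = "\n".join(paths).encode()
    return base64.urlsafe_b64encode(zlib.compress(raw)).decode().rstrip("=")

def decompress_file_list(value):
    """Inverse of compress_file_list; restore base64 padding before decoding."""
    padded = value + "=" * (-len(value) % 4)
    return zlib.decompress(base64.urlsafe_b64decode(padded)).decode().split("\n")

paths = ["src/index.ts", "src/components/Button.tsx"]  # example paths
param = compress_file_list(paths)
print(f"?compressedFileList={param}")
assert decompress_file_list(param) == paths  # round trip is lossless
```

Stripping the `=` padding keeps the value URL-safe without a further URL-encoding pass; the decoder just restores the padding before decoding.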
+ +### URL-based filtering + +Any filters you apply through the UI will update the URL with the appropriate query parameters. This means you can: + +1. Share specific filtered views with others +2. Bookmark comparisons with specific file filters +3. Navigate back and forth between different filter states using browser history + +## Implementation details + +The system makes strategic performance trade-offs to provide a smooth user experience: + +```typescript +/* + * For customers with extremely large PRs with thousands of files, + * we fetch all file paths in a single API call to enable client-side filtering. + * + * This eliminates numerous smaller API calls for server-side filtering + * when users search in the FileDiffPicker. While requiring one large + * initial API call, it significantly improves subsequent search performance. + */ +``` + +The file filtering system uses a specialized file metadata query that is faster and lighter than the comprehensive file diffs query used to display actual code changes. + +## Usage examples + +1. View only JavaScript files: + +```bash + ?filePath=src/utils.js&filePath=src/components/App.js +``` + +2. Focus on a single file: + +```bash + ?focus=true&filePath=src/components/Button.tsx + ``` + +3. Use a compressed file list for many files: + +```bash + ?compressedFileList=H4sIAAAAAAAAA2NgYGBg... +``` + +This flexible filtering system allows you to create customized views of repository comparisons, making reviewing changes in large projects easier. + + + # Search Snippets @@ -21093,7 +21850,9 @@ You can also search across multiple contexts at once using the `OR` [boolean ope To organize your search contexts better, you can use a specific context as your default and star any number of contexts. This affects what context is selected when loading Sourcegraph and how the list of contexts is sorted. -### Default context +### Default search context + +#### For users Any authenticated user can use a search context as their default. 
To set a default, go to the search context management page, open the "..." menu for a context, and click on "Use as default". If the user doesn't have a default, `global` will be used. @@ -21101,6 +21860,25 @@ If a user ever loses access to their default search context (eg. the search cont The default search context is always selected when loading the Sourcegraph webapp. The one exception is when opening a link to a search query that does not contain a `context:` filter, in which case the `global` context will be used. +#### For site admins + +Site admins can set a default search context for all users on the Sourcegraph instance. This helps teams improve onboarding and search quality by focusing searches on the most relevant parts of the codebase rather than the entire indexed set of repositories. + +An admin can set a default search context via: + +- Click the **More** button from the top menu of the Sourcegraph web app +- Next, go to **Search Contexts** +- For the existing context list, click on the **...** menu and select **[Site admin only] Set as default for all users** +- Alternatively, you can create a new context and then set it for all users via the same option + +![admin-level-default-context](https://storage.googleapis.com/sourcegraph-assets/Docs/admin-default-context.png) + +Here are a few considerations: + +- If a user already has a personal default search context set, it will not be overridden +- The admin-set default will apply automatically if a user only uses the global context +- Individual users can see the instance-wide default and override it with their own default if they choose + ### Starred contexts Any authenticated user can star a search context. To star a context, click on the star icon in the search context management page. This will cause the context to appear near the top of their search contexts list. The `global` context cannot be starred. @@ -26173,6 +26951,51 @@ Other tips: + +# src search-jobs + +

`src search-jobs` is a tool that manages search jobs on a Sourcegraph instance.

+ +## Usage + +```bash +'src search-jobs' is a tool that manages search jobs on a Sourcegraph instance. + + Usage: + + src search-jobs command [command options] + + The commands are: + + cancel cancels a search job by ID + create creates a search job + delete deletes a search job by ID + get gets a search job by ID + list lists search jobs + logs fetches logs for a search job by ID + restart restarts a search job by ID + results fetches results for a search job by ID + + Common options for all commands: + -c Select columns to display (e.g., -c id,query,state,username) + -json Output results in JSON format + + Use "src search-jobs [command] -h" for more information about a command. +``` + +## Sub-commands + +* [cancel](search-jobs/cancel) +* [create](search-jobs/create) +* [delete](search-jobs/delete) +* [get](search-jobs/get) +* [list](search-jobs/list) +* [logs](search-jobs/logs) +* [restart](search-jobs/restart) +* [results](search-jobs/results) + +
+ # `src scout` @@ -26290,6 +27113,7 @@ Most commands require that the user first [authenticate](quickstart#connect-to-s * [`repos`](references/repos) * [`scout`](references/scout) * [`search`](references/search) +* [`search-jobs`](references/search-jobs) * [`serve-git`](references/serve-git) * [`snapshot`](references/snapshot) * [`teams`](references/teams) @@ -26803,6 +27627,359 @@ Examples: + +# src search-jobs results + +

`src search-jobs results` is a tool that gets the results of a search job on a Sourcegraph instance.

+ +## Usage + +```bash +Usage of 'src search-jobs results': + -c string + Comma-separated list of columns to display. Available: id,query,state,username,createdat,startedat,finishedat,url,logurl,total,completed,failed,inprogress (default "id,username,state,query") + -dump-requests + Log GraphQL requests and responses to stdout + -get-curl + Print the curl command for executing this query and exit (WARNING: includes printing your access token!) + -insecure-skip-verify + Skip validation of TLS certificates against trusted chains + -json + Output results as JSON for programmatic access + -out string + File path to save the results (optional) + -trace + Log the trace ID for requests. See https://docs.sourcegraph.com/admin/observability/tracing + -user-agent-telemetry + Include the operating system and architecture in the User-Agent sent with requests to Sourcegraph (default true) + + Examples: + + Get the results of a search job: + $ src search-jobs results U2VhcmNoSm9iOjY5 + + Save search results to a file: + $ src search-jobs results U2VhcmNoSm9iOjY5 -out results.jsonl + + The results command retrieves the raw search results in JSON Lines format. + Each line contains a single JSON object representing a search result. The data + will be displayed on stdout or written to the file specified with -out. +``` + +
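Because `results` writes JSON Lines, a saved file can be streamed one object at a time. A short Python sketch — the file name is an example and any field names are hypothetical, since the shape of each result object is not specified here:

```python
import json

def iter_results(path):
    """Yield one search result per line from a JSONL file saved with -out."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # tolerate trailing blank lines
                yield json.loads(line)

# Example: count results saved via
#   src search-jobs results U2VhcmNoSm9iOjY5 -out results.jsonl
# total = sum(1 for _ in iter_results("results.jsonl"))
```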
+ + +# src search-jobs restart + +

`src search-jobs restart` is a tool that restarts a search job on a Sourcegraph instance.

+ +## Usage + +```bash +Usage of 'src search-jobs restart': + -c string + Comma-separated list of columns to display. Available: id,query,state,username,createdat,startedat,finishedat,url,logurl,total,completed,failed,inprogress (default "id,username,state,query") + -dump-requests + Log GraphQL requests and responses to stdout + -get-curl + Print the curl command for executing this query and exit (WARNING: includes printing your access token!) + -insecure-skip-verify + Skip validation of TLS certificates against trusted chains + -json + Output results as JSON for programmatic access + -trace + Log the trace ID for requests. See https://docs.sourcegraph.com/admin/observability/tracing + -user-agent-telemetry + Include the operating system and architecture in the User-Agent sent with requests to Sourcegraph (default true) + + Examples: + + Restart a search job by ID: + + $ src search-jobs restart U2VhcmNoSm9iOjY5 + + Restart a search job and display specific columns: + + $ src search-jobs restart U2VhcmNoSm9iOjY5 -c id,state,query + + Restart a search job and output in JSON format: + + $ src search-jobs restart U2VhcmNoSm9iOjY5 -json + + Available columns are: id, query, state, username, createdat, startedat, finishedat, + url, logurl, total, completed, failed, inprogress +``` + +
+ + +# src search-jobs logs + +

`src search-jobs logs` is a tool that gets the logs of a search job on a Sourcegraph instance.

+ +## Usage + +```bash +Usage of 'src search-jobs logs': + -c string + Comma-separated list of columns to display. Available: id,query,state,username,createdat,startedat,finishedat,url,logurl,total,completed,failed,inprogress (default "id,username,state,query") + -dump-requests + Log GraphQL requests and responses to stdout + -get-curl + Print the curl command for executing this query and exit (WARNING: includes printing your access token!) + -insecure-skip-verify + Skip validation of TLS certificates against trusted chains + -json + Output results as JSON for programmatic access + -out string + File path to save the logs (optional) + -trace + Log the trace ID for requests. See https://docs.sourcegraph.com/admin/observability/tracing + -user-agent-telemetry + Include the operating system and architecture in the User-Agent sent with requests to Sourcegraph (default true) + + Examples: + + View the logs of a search job: + $ src search-jobs logs U2VhcmNoSm9iOjY5 + + Save the logs to a file: + $ src search-jobs logs U2VhcmNoSm9iOjY5 -out logs.csv + + The logs command retrieves the raw log data in CSV format. The data will be + displayed on stdout or written to the file specified with -out. +``` + +
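Since the logs are plain CSV, they can be inspected with any CSV tooling. A small sketch using Python's `csv` module; the column names used here (`time`, `state`, `message`) are assumptions for illustration, not the documented export schema:

```python
import csv
import io

# Hypothetical CSV log export as saved by `-out logs.csv`.
# The header row is an assumption for illustration.
sample = """\
time,state,message
2024-01-01T00:00:00Z,PROCESSING,started shard 1
2024-01-01T00:00:05Z,FAILED,shard 1 timed out
"""

# DictReader keys each row by the header, making filtering straightforward.
reader = csv.DictReader(io.StringIO(sample))
failures = [row for row in reader if row["state"] == "FAILED"]
print(len(failures))
```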
+ + +# src search-jobs list + +

`src search-jobs list` is a tool that lists search jobs on a Sourcegraph instance.

+ +## Usage + +```bash +Usage of 'src search-jobs list': + -asc + Sort search jobs in ascending order + -c string + Comma-separated list of columns to display. Available: id,query,state,username,createdat,startedat,finishedat,url,logurl,total,completed,failed,inprogress (default "id,username,state,query") + -dump-requests + Log GraphQL requests and responses to stdout + -get-curl + Print the curl command for executing this query and exit (WARNING: includes printing your access token!) + -insecure-skip-verify + Skip validation of TLS certificates against trusted chains + -json + Output results as JSON for programmatic access + -limit int + Limit the number of search jobs returned (default 10) + -order-by string + Sort search jobs by a sortable field (QUERY, CREATED_AT, STATE) (default "CREATED_AT") + -trace + Log the trace ID for requests. See https://docs.sourcegraph.com/admin/observability/tracing + -user-agent-telemetry + Include the operating system and architecture in the User-Agent sent with requests to Sourcegraph (default true) + + Examples: + + List all search jobs: + + $ src search-jobs list + + List all search jobs in ascending order: + + $ src search-jobs list --asc + + Limit the number of search jobs returned: + + $ src search-jobs list --limit 5 + + Order search jobs by a field (must be one of: QUERY, CREATED_AT, STATE): + + $ src search-jobs list --order-by QUERY + + Select specific columns to display: + + $ src search-jobs list -c id,state,username,createdat + + Output results as JSON: + + $ src search-jobs list -json + + Combine options: + + $ src search-jobs list --limit 10 --order-by STATE --asc -c id,query,state + + Available columns are: id, query, state, username, createdat, startedat, finishedat, + url, logurl, total, completed, failed, inprogress +``` + +
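The `-json` flag makes the listing easy to reshape programmatically. A sketch of client-side sorting on a hypothetical payload; the field names (`id`, `state`, `createdAt`) are assumptions, and for server-side ordering the `--order-by`/`--limit` flags shown above are preferable:

```python
import json

# Hypothetical `-json` output from `src search-jobs list`;
# the field names are assumptions for illustration.
jobs = json.loads(
    '[{"id": "a", "state": "COMPLETED", "createdAt": "2024-02-01"},'
    ' {"id": "b", "state": "QUEUED", "createdAt": "2024-03-01"}]'
)

# ISO-8601 timestamps sort correctly as strings, so newest-first is a
# simple reverse sort on createdAt.
newest_first = sorted(jobs, key=lambda j: j["createdAt"], reverse=True)
print([j["id"] for j in newest_first[:1]])  # → ['b']
```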
+ + +# src search-jobs get + +

`src search-jobs get` is a tool that gets details of a single search job on a Sourcegraph instance.

+ +## Usage + +```bash +Usage of 'src search-jobs get': + -c string + Comma-separated list of columns to display. Available: id,query,state,username,createdat,startedat,finishedat,url,logurl,total,completed,failed,inprogress (default "id,username,state,query") + -dump-requests + Log GraphQL requests and responses to stdout + -get-curl + Print the curl command for executing this query and exit (WARNING: includes printing your access token!) + -insecure-skip-verify + Skip validation of TLS certificates against trusted chains + -json + Output results as JSON for programmatic access + -trace + Log the trace ID for requests. See https://docs.sourcegraph.com/admin/observability/tracing + -user-agent-telemetry + Include the operating system and architecture in the User-Agent sent with requests to Sourcegraph (default true) + + Examples: + + Get a search job by ID: + + $ src search-jobs get U2VhcmNoSm9iOjY5 + + Get a search job with specific columns: + + $ src search-jobs get U2VhcmNoSm9iOjY5 -c id,state,username + + Get a search job in JSON format: + + $ src search-jobs get U2VhcmNoSm9iOjY5 -json + + Available columns are: id, query, state, username, createdat, startedat, finishedat, + url, logurl, total, completed, failed, inprogress +``` + +
+ + +# src search-jobs delete + +
## Usage

`src search-jobs delete` is a tool that deletes a search job on a Sourcegraph instance.

## Usage
+ +```bash +Usage of 'src search-jobs delete': + -c string + Comma-separated list of columns to display. Available: id,query,state,username,createdat,startedat,finishedat,url,logurl,total,completed,failed,inprogress (default "id,username,state,query") + -dump-requests + Log GraphQL requests and responses to stdout + -get-curl + Print the curl command for executing this query and exit (WARNING: includes printing your access token!) + -insecure-skip-verify + Skip validation of TLS certificates against trusted chains + -json + Output results as JSON for programmatic access + -trace + Log the trace ID for requests. See https://docs.sourcegraph.com/admin/observability/tracing + -user-agent-telemetry + Include the operating system and architecture in the User-Agent sent with requests to Sourcegraph (default true) + + Examples: + + Delete a search job by ID: + + $ src search-jobs delete U2VhcmNoSm9iOjY5 + + Arguments: + The ID of the search job to delete. + + The delete command permanently removes a search job and outputs a confirmation message. +``` + +
+ + +# src search-jobs create + +

`src search-jobs create` is a tool that creates a search job on a Sourcegraph instance.

+ +## Usage + +```bash +Usage of 'src search-jobs create': + -c string + Comma-separated list of columns to display. Available: id,query,state,username,createdat,startedat,finishedat,url,logurl,total,completed,failed,inprogress (default "id,username,state,query") + -dump-requests + Log GraphQL requests and responses to stdout + -get-curl + Print the curl command for executing this query and exit (WARNING: includes printing your access token!) + -insecure-skip-verify + Skip validation of TLS certificates against trusted chains + -json + Output results as JSON for programmatic access + -trace + Log the trace ID for requests. See https://docs.sourcegraph.com/admin/observability/tracing + -user-agent-telemetry + Include the operating system and architecture in the User-Agent sent with requests to Sourcegraph (default true) + + Examples: + + Create a search job: + + $ src search-jobs create "repo:^github\.com/sourcegraph/sourcegraph$ sort:indexed-desc" + + Create a search job and display specific columns: + + $ src search-jobs create "repo:sourcegraph" -c id,state,username + + Create a search job and output in JSON format: + + $ src search-jobs create "repo:sourcegraph" -json + + Available columns are: id, query, state, username, createdat, startedat, finishedat, + url, logurl, total, completed, failed, inprogress +``` + +
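A common follow-up to `create` is polling until the job reaches a terminal state. A hedged sketch of such a loop: `fetch_state` is a stub standing in for a real call to `src search-jobs get <ID> -json`, and the state names are assumptions, not the documented state machine:

```python
import time

# Simulated sequence of states, as repeated `get` calls might report them.
# The state names here are assumptions for illustration.
_states = iter(["QUEUED", "PROCESSING", "COMPLETED"])

def fetch_state():
    # Stand-in for invoking `src search-jobs get <ID> -json` and reading
    # the job's state field.
    return next(_states, "COMPLETED")

def wait_for_job(poll_seconds=0.01, max_polls=100):
    for _ in range(max_polls):
        state = fetch_state()
        if state in ("COMPLETED", "FAILED", "CANCELED"):
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("search job did not finish")

print(wait_for_job())  # → COMPLETED
```

In a real script, a longer poll interval (seconds, not milliseconds) would be appropriate to avoid hammering the instance.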
+ + +# src search-jobs cancel + +

`src search-jobs cancel` is a tool that cancels a search job on a Sourcegraph instance.

+ +## Usage + +```bash +Usage of 'src search-jobs cancel': + -c string + Comma-separated list of columns to display. Available: id,query,state,username,createdat,startedat,finishedat,url,logurl,total,completed,failed,inprogress (default "id,username,state,query") + -dump-requests + Log GraphQL requests and responses to stdout + -get-curl + Print the curl command for executing this query and exit (WARNING: includes printing your access token!) + -insecure-skip-verify + Skip validation of TLS certificates against trusted chains + -json + Output results as JSON for programmatic access + -trace + Log the trace ID for requests. See https://docs.sourcegraph.com/admin/observability/tracing + -user-agent-telemetry + Include the operating system and architecture in the User-Agent sent with requests to Sourcegraph (default true) + + Examples: + + Cancel a search job by ID: + + $ src search-jobs cancel U2VhcmNoSm9iOjY5 + + Arguments: + The ID of the search job to cancel. + + The cancel command stops a running search job and outputs a confirmation message. +``` + +
+ # `src repos update-metadata` @@ -29762,7 +30939,11 @@ Start by going to the workspace that failed. Then, you get an overview of all th

Learn how to track your existing changesets.

-Batch Changes allow you not only to [publish changesets](/batch-changes/publishing-changesets) but also to **import and track changesets** that already exist on different code hosts. That allows you to get an overview of the status of multiple changesets, with the ability to filter and drill down into the details of a specific changeset. +Batch Changes allow you not only to [publish changesets](/batch-changes/publishing-changesets) but also to **import and track changesets** that already exist on different code hosts. That allows you to get an overview of the status of multiple changesets, with the ability to filter and drill down into the details of a specific changeset. After you have successfully imported changesets, you can perform the following bulk actions: + +- Write comments on each of the imported changesets +- Merge each of the imported changesets to main +- Close each of the imported changesets ![tracking_existing_changesets_overview](https://sourcegraphstatic.com/docs/images/batch_changes/2024/tracking_existing_changesets_overview.png) @@ -38612,14 +39793,18 @@ For upgrade procedures or general info about sourcegraph versioning see the link - [General Upgrade Info](/admin/updates) - [Technical changelog](/technical-changelog) -> ***Attention:** These notes may contain relevant information about the infrastructure update such as resource requirement changes or versions of depencies (Docker, kubernetes, externalized databases).* +> ***Attention:** These notes may contain relevant information about the infrastructure update such as resource requirement changes or versions of dependencies (Docker, kubernetes, externalized databases).* > > ***If the notes indicate a patch release exists, target the highest one.*** +## v6.4.0 + +- The repo-updater service is no longer needed and will be removed from deployment methods going forward. + ## v6.0.0 - Sourcegraph 6.0.0 no longer supports PostgreSQL 12, admins must upgrade to PostgreSQL 16. 
See our [postgres 12 end of life](/admin/postgres12_end_of_life_notice) notice! As well as [supporting documentation](/admin/postgres) and advisements on how to upgrade. -- The Kuberentes Helm deployment type does not support MVU from Sourcegraph `v5.9.45` versions and earlier to Sourcegraph `v6.0.0`. Admins seeking to upgrade to Sourcegraph `v6.0.0` should upgrade to `v5.11.6271` then use the standard upgrade procedure to get to `v6.0.0`. This is because migrator v6.0.0 will no longer connect to Postgres 12 databases. For more info see our [PostgreSQL upgrade docs](/admin/postgres#requirements). +- The Kubernetes Helm deployment type does not support MVU from Sourcegraph `v5.9.45` versions and earlier to Sourcegraph `v6.0.0`. Admins seeking to upgrade to Sourcegraph `v6.0.0` should upgrade to `v5.11.6271` then use the standard upgrade procedure to get to `v6.0.0`. This is because migrator v6.0.0 will no longer connect to Postgres 12 databases. For more info see our [PostgreSQL upgrade docs](/admin/postgres#requirements). ## v5.9.0 ➔ v5.10.1164 @@ -43052,7 +44237,8 @@ Learn more about how to apply these environment variables in [docker-compose](/a ## Log format -A Sourcegraph service's log output format is configured via the environment variable `SRC_LOG_FORMAT`. The valid values are: +A Sourcegraph service's log output format is configured via the environment variable `SRC_LOG_FORMAT`. This design facilitates integration with external log aggregation systems and SIEM tools for centralized analysis, monitoring, and alerting. +The valid values are: * `condensed`: Optimized for human readability. * `json`: Machine-readable JSON format. @@ -43105,7 +44291,7 @@ Note that this will also affect child scopes. So in the example you will also re ## Log sampling -Sourcegraph services that have migrated to the [new internal logging standard](/dev/how-to/add_logging) have log sampling enabled by default. +Sourcegraph services have log sampling enabled by default. 
The first 100 identical log entries per second will always be output, but thereafter only every 100th identical message will be output. This behaviour can be configured for each service using the following environment variables: @@ -43155,7 +44341,7 @@ Sourcegraph is designed, and ships with, a number of observability tools and cap ## Support -For help configuring observability on your Sourcegraph instance, use our [public issue tracker](https://github.com/sourcegraph/issues/issues). +For help configuring observability on your Sourcegraph instance, contact the support team at support@sourcegraph.com.
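The log-sampling rule described earlier (the first 100 identical entries per second are kept, then only every 100th) can be modeled roughly as follows; this is an illustrative sketch, not Sourcegraph's actual implementation, and it omits the per-second window reset:

```python
from collections import defaultdict

class Sampler:
    """Rough model of keep-first-N, then every-Nth log sampling."""

    def __init__(self, initial=100, thereafter=100):
        self.initial = initial
        self.thereafter = thereafter
        self.counts = defaultdict(int)  # occurrences seen per identical message

    def should_log(self, message):
        self.counts[message] += 1
        n = self.counts[message]
        if n <= self.initial:
            return True  # always keep the first `initial` occurrences
        # afterwards, keep only every `thereafter`-th occurrence
        return (n - self.initial) % self.thereafter == 0

sampler = Sampler()
# 500 identical messages: 100 kept up front, then 4 more (200th, 300th, 400th, 500th).
kept = sum(sampler.should_log("identical message") for _ in range(500))
print(kept)  # → 104
```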
@@ -90131,23 +91317,6 @@ Open the repository's `Settings` page on Sourcegraph and from the `Mirroring` ta ![Reclone repository](https://storage.googleapis.com/sourcegraph-assets/docs/images/admin/how-to/reclone-repo.png) -## Manually purge deleted repository data from disk - -After a repository is deleted from Sourcegraph in the database, its data still remains on disk on gitserver so that in the event the repository is added again it doesn't need to be recloned. These repos are automatically removed when disk space is low on gitserver. However, it is possible to manually trigger removal of deleted repos in the following way: - -**NOTE:** This is not available on Docker Compose deployments. - -1. Browse to Site Admin -> Instrumentation -> Repo Updater -> Manual Repo Purge -2. You'll be at a url similar to `https://sourcegraph-instance/-/debug/proxies/repo-updater/manual-purge` -3. You need to specify a limit parameter which specifies the upper limit to the number of repos that will be removed, for example: `https://sourcegraph-instance/-/debug/proxies/repo-updater/manual-purge?limit=1000` -4. This will trigger a background process to delete up to `limit` repos, rate limited at 1 per second. - -It's possible to see the number of repos that can be cleaned up on disk in Grafana using this query: - -``` -max(src_repoupdater_purgeable_repos) -``` -