bext: update release branch #57266
Merged
Conversation
Relates to sourcegraph/zoekt#646. Test plan: just a Changelog update.

Co-authored-by: Thorsten Ball <mrnugget@gmail.com>
* Make `sg wolfi update-hashes` work on single images
* User-friendly tweak: `oci_deps` uses `_` in image names rather than `-`, which is used everywhere else, so auto-replace them
* `sg generate`
…56730)

* Connect view logs and download results search job UI
* Update bazel configuration
* Adjust order argument
* Change routes
* Respect URL file name
* Use page-like (bi-directional) pagination
* Add onboarding tour back to homepage
* user-onboarding: Add back (simplified) user onboarding tour

  This commit adds back the existing onboarding tour with a couple of changes:
  - The tour is only shown
    - for authenticated users
    - on non-dotcom instances
    - when the `end-user-onboarding` feature flag is set
  - The Cody CTA is only shown when the onboarding tour is not shown.

  In a future PR the content of the tour will be updated to include Cody-specific steps.
* Update other tour callsites
* Update stories

Co-authored-by: Coury Clark <coury@sourcegraph.com>
This PR introduces a new "search query" step type to the onboarding tour. This type takes a query template and a list of code snippets. When the step is rendered it will make an "exploratory" query for each code snippet to find one that returns results. That query is then used in the onboarding tour. The query template currently supports four variables: `$$userorg`, `$$userrepo`, `$$userlang` and `$$snippet`. The PR also removes the old logic for determining the user's language, which will be supplied by the user in a different way.
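The variable substitution and exploratory-query fallback described above can be sketched roughly as follows. Go is used for illustration (the real implementation lives in the TypeScript frontend), and all function names, the search callback, and the substitution mechanics are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// fillTemplate substitutes the tour's query-template variables. The
// variable names come from the PR description; the substitution
// mechanics here are an illustrative assumption.
func fillTemplate(tmpl string, vars map[string]string) string {
	out := tmpl
	for name, value := range vars {
		out = strings.ReplaceAll(out, "$$"+name, value)
	}
	return out
}

// firstQueryWithResults runs one exploratory query per snippet and
// returns the first query that yields results. hasResults stands in
// for an actual search API call and is hypothetical.
func firstQueryWithResults(tmpl string, vars map[string]string, snippets []string, hasResults func(string) bool) (string, bool) {
	for _, snippet := range snippets {
		vars["snippet"] = snippet
		q := fillTemplate(tmpl, vars)
		if hasResults(q) {
			return q, true
		}
	}
	return "", false
}

func main() {
	q, ok := firstQueryWithResults(
		"repo:$$userrepo lang:$$userlang $$snippet",
		map[string]string{"userrepo": "example/repo", "userlang": "go"},
		[]string{"func main", "type Server"},
		func(q string) bool { return strings.Contains(q, "func main") }, // fake search backend
	)
	fmt.Println(ok, q)
}
```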
I am no longer working primarily on search backend work, and the notifications are getting a bit overwhelming. As always, feel free to ping me directly for a review.
`searchJobResolver.State` now returns a state based on the aggregate state of all (sub-)jobs related to the job. Now, only if all (sub-)jobs completed successfully will the job report the state "Completed". Previously, we reported "Completed" once the top-level job finished, even if repo-rev jobs were still running.

Test plan:
- Updated unit test
- Manual testing

```graphql
mutation create {
  createSearchJob(query: "context:global 1@rev1 2@rev5 3@rev6") {
    id
  }
}

query state {
  node(id: "U2VhcmNoSm9iOjEwNQ==") {
    ... on SearchJob {
      repoStats {
        inProgress
        failed
        completed
        total
      }
      state
    }
  }
}
```

```json
{
  "data": {
    "node": {
      "repoStats": {
        "inProgress": 0,
        "failed": 0,
        "completed": 7,
        "total": 7
      },
      "state": "COMPLETED"
    }
  }
}
```
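The aggregation rule described here ("Completed" only when every sub-job completed) can be sketched like this. The state names and the failure-dominance behavior are illustrative assumptions, not the actual resolver code:

```go
package main

import "fmt"

// JobState is a hypothetical stand-in for the search job state enum.
type JobState string

const (
	StateQueued     JobState = "QUEUED"
	StateProcessing JobState = "PROCESSING"
	StateCompleted  JobState = "COMPLETED"
	StateFailed     JobState = "FAILED"
)

// aggregateState reports COMPLETED only when every sub-job completed.
// Any failed sub-job fails the aggregate (an assumption here), and any
// queued or in-flight sub-job keeps the aggregate in PROCESSING.
func aggregateState(subJobs []JobState) JobState {
	state := StateCompleted
	for _, s := range subJobs {
		switch s {
		case StateFailed:
			return StateFailed
		case StateQueued, StateProcessing:
			state = StateProcessing
		}
	}
	return state
}

func main() {
	// One repo-rev job still running: the whole job is not "Completed" yet.
	fmt.Println(aggregateState([]JobState{StateCompleted, StateProcessing}))
}
```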
Fixes #56783 The precision of the cursor was lower than the precision of the dates in the DB. The rounding errors caused the pagination to behave oddly.
…rience (#56768)

This PR integrates the onboarding tour with the admin configuration, i.e. the onboarding tour now uses the content configuration from the server. Additionally I made some changes to the admin page to improve UX:

- Show a simple preview of the onboarding tour, which live-updates as the config is edited
- Added separate schema validation (outside of Monaco) to
  - ensure that the preview always gets a valid config
  - ensure that the user cannot save an invalid schema (and break the tour for other users)
  - make errors in the config more visible
- Added a "Reset" button to restore the original (hardcoded) onboarding tour so that the user can always go back to a valid config.

It also fixes some issues overlooked in the previous PR (such as schema issues).

Note on the schema validation: I also could have added validation logic when loading the config for the real tour, but that would add a couple of dependencies to the "normal" pages that shouldn't be necessary. Preventing an invalid config from getting in in the first place seems better.

Note on the preview: In order to make the preview work I moved some logic into the tour context, so that those values can be overwritten easily for the preview. Specifically this affects the dynamic query generation, which we don't want to do for the preview.
* Fill out `CheckConnection` and mark Perforce as able to check connections.
* Refactor the connection test to not rely on `IsCloneable`

  Because connecting to Perforce requires using the `p4` CLI binary, which is on `gitserver`, we're using the Perforce VCS syncer to do the connection check. Add a method to the VCS syncer, specifically for Perforce, for checking the connection instead of hacking a solution that re-uses `IsCloneable`, which requires a valid depot. Adding another func won't break composition with `VCSSyncer`.
* Update comment on `CheckConnection`
* Add a 10-second timeout for the connection test

  10 seconds is somewhat arbitrary, but seems like enough time to allow for various connection issues, while not being too long for the frontend.
* Change from directly using the syncer to making an RPC call to gitserver, because the call to `CheckConnection` comes from the frontend, where `p4` is not available.
…to shorten it

This is rather finicky - we need to be careful not to change the defaults or existing projects

## Test plan

sourcegraph/managed-services#7
This updates variable names, property names, env var names, etc., to call it "Cody App". The entire diff was created by running the following commands:

```
fastmod -e go SourcegraphAppMode CodyAppMode
fastmod -e go,ts,tsx sourcegraphAppMode codyAppMode
fastmod -e ts,tsx isSourcegraphApp isCodyApp
fastmod -e ts,tsx,go,yaml,sh,js SOURCEGRAPH_APP CODY_APP
fastmod -e ts,tsx,go,json,mod,graphql,md,js 'Sourcegraph App\b' 'Cody App'
fastmod -e ts,tsx,go,json,mod,graphql,md,js 'Sourcegraph app\b' 'Cody app'
# with a few changes skipped
```
Part of #56286. Stubs the GraphQL mutations that will allow clients to report data to our new telemetry events system. It's similar to the gRPC API we are going to expose for ingesting this data on our end (#56519), except excluding fields that we can (and should) be hydrating serverside instead. See stack for the rest of the implementation: #56297 (comment)

## Test plan

```
sg start
```

![image](https://github.com/sourcegraph/sourcegraph/assets/23356519/f2a182a4-4cee-4ad8-9a39-d11b7cfcc304)
Lays out the gRPC API proposed in https://docs.google.com/document/d/14WBt80sbmVm73B-R1Srs5cSunZo2IhDMtKiyAgkYujU/edit#bookmark=id.gm97tmnvqp2t for Telemetry Gateway, which will receive all events emitted from Sourcegraph instances. It's a superset of the client SDK (https://github.com/sourcegraph/telemetry) and the backend SDK (#56520), relying on the Sourcegraph instance to opaquely hydrate a lot of the fields that we used to require clients to send end-to-end.

Only `internal/telemetrygateway/v1/telemetrygateway.proto` requires review - everything else is generated code.

See stack for rest of implementation: #56519 (comment)

Stacked on #56297
Part of #56287

## Test plan

n/a - API spec only.
This PR introduces:

1. a new backend telemetry API, with `internal/telemetry.EventRecorder`, for backend services to generate their own events, based on [the proposal](https://docs.google.com/document/d/14WBt80sbmVm73B-R1Srs5cSunZo2IhDMtKiyAgkYujU/edit#bookmark=id.wy4lr5f3lxyk) - this is similar to the clientside SDK: https://github.com/sourcegraph/telemetry
2. a caching layer in the database, `internal/database.TelemetryEventsExportQueueStore`, that stores raw Protobuf messages so that a worker can be implemented on top to pull events out and export them in batches
   - planned strategy is to mark exported entries as exported, and then periodically prune exported events from the table after N day(s)
3. an `internal/telemetry/teestore.Store` which tees events into the existing `event_logs` table (which we are no longer considering revamping due to extensive existing integrations with it) as well as the new `TelemetryEventsExportQueueStore`
   - adding things to this store is behind an off-by-default feature flag `telemetry-export` - when the flag is disabled, adding to the store is a no-op
4. adapters to send events from `telemetry.EventRecorder` and the GraphQL mutation added in #56297 to the `teestore.Store`

Actually exporting things, and the destination service itself, will be built in #56699

Stacked on #56519
Closes #56283
Closes #56285
Closes #56286

## Test plan

Unit and integration tests, and manual test plan in #56699
This change adds:

- telemetry export background jobs: flagged behind `TELEMETRY_GATEWAY_EXPORTER_EXPORT_ADDR`, default empty => disabled
- telemetry redaction: configured in package `internal/telemetry/sensitivemetadataallowlist`
- telemetry-gateway service receiving events and forwarding them to a pub/sub topic (or just logging them, as configured in local dev)
- utilities for easily creating an event recorder: `internal/telemetry/telemetryrecorder`

Notes:

- all changes are feature-flagged to some degree, off by default, so the merge should be fairly low-risk.
- we decided that transmitting the full license key continues to be the way to go. We transmit it once per stream and attach it on all events in the telemetry-gateway. There is no auth mechanism at the moment.
- GraphQL return type `EventLog.Source` is now a plain string instead of a string enum. This should not be a breaking change in our clients, but must be made so that our generated V2 events do not break requesting of event logs.

Stacked on #56520
Closes #56289
Closes #56287

## Test plan

Add an override to make the export super frequent:

```
env:
  TELEMETRY_GATEWAY_EXPORTER_EXPORT_INTERVAL: "10s"
  TELEMETRY_GATEWAY_EXPORTER_EXPORTED_EVENTS_RETENTION: "5m"
```

Start sourcegraph:

```
sg start
```

Enable the `telemetry-export` feature flag (from #56520).

Emit some events in GraphQL:

```gql
mutation {
  telemetry {
    recordEvents(events: [{
      feature: "foobar"
      action: "view"
      source: { client: "WEB" }
      parameters: { version: 0 }
    }]) {
      alwaysNil
    }
  }
}
```

See a series of log events:

```
[ worker] INFO worker.telemetrygateway-exporter telemetrygatewayexporter/telemetrygatewayexporter.go:61 Telemetry Gateway export enabled - initializing background routines
[ worker] INFO worker.telemetrygateway-exporter telemetrygatewayexporter/exporter.go:99 exporting events {"maxBatchSize": 10000, "count": 1}
[telemetry-g...y] INFO telemetry-gateway.pubsub pubsub/topic.go:115 Publish {"TraceId": "7852903434f0d2f647d397ee83b4009b", "SpanId": "8d945234bccf319b", "message": "{\"event\":{\"id\":\"dc96ae84-4ac4-4760-968f-0a0307b8bb3d\",\"timestamp\":\"2023-09-19T01:57:13.590266Z\",\"feature\":\"foobar\", ....
```

Build:

```
export VERSION="insiders"
bazel run //cmd/telemetry-gateway:candidate_push --config darwin-docker --stamp --workspace_status_command=./dev/bazel_stamp_vars.sh -- --tag $VERSION --repository us.gcr.io/sourcegraph-dev/telemetry-gateway
```

Deploy: sourcegraph/managed-services#7

Add override:

```yaml
env:
  # Port required. TODO: What's the best way to provide gRPC addresses, such that a
  # localhost address is also possible?
  TELEMETRY_GATEWAY_EXPORTER_EXPORT_ADDR: "https://telemetry-gateway.sgdev.org:443"
```

Repeat the above (`sg start` and emit some events):

```
[ worker] INFO worker.telemetrygateway-exporter telemetrygatewayexporter/exporter.go:94 exporting events {"maxBatchSize": 10000, "count": 6}
[ worker] INFO worker.telemetrygateway-exporter telemetrygatewayexporter/exporter.go:113 events exported {"maxBatchSize": 10000, "succeeded": 6}
[ worker] INFO worker.telemetrygateway-exporter telemetrygatewayexporter/exporter.go:94 exporting events {"maxBatchSize": 10000, "count": 1}
[ worker] INFO worker.telemetrygateway-exporter telemetrygatewayexporter/exporter.go:113 events exported {"maxBatchSize": 10000, "succeeded": 1}
```
Fix client cache updates for search jobs mutations
This was used for the deprecated and removed Sourcegraph extension API support.
Updates the defaults based on ballpark estimates originally proposed in https://docs.google.com/document/d/1qZqtacGELLa6LXqZAs7oD7XOSRCEnucUHKB8YUu6XMI/edit.
support single-program execution Now, `sg start single-program` starts a single-binary local dev server. This is similar to Cody app, but instead of using a Tauri desktop app UI and limiting to only Cody-related functionality, it runs a full Sourcegraph instance and lets you access it through your web browser. It is useful for local dev because it's less resource-intensive and has faster recompile/relink times than `sg start` (which runs many processes).
CodeMirror is now used as the editor for query inputs, but there remained some old code used for Monaco query inputs.
This feature flag was removed in 8d7310a 4 months ago, but a few references remained.
This makes it so that the previously on-by-default behavior is now the only behavior. This only affects the old query input, not the new query input. The behavior is confusing to describe, but it basically means that as you type in the old query input, there is no autocomplete item selected until you press the down arrow, so pressing <kbd>Enter</kbd> does not confusingly accept an autocomplete item that you did not intend. This was set to on by default on https://sourcegraph.com 9 months ago at sourcegraph/deploy-sourcegraph-cloud#17546, and has been on by default in all other instances for 1 year.
- Remove "monaco" from the name of components that have nothing to do with Monaco (such as MonacoField). These used to be related to a query input backed by Monaco but have long been backed by CodeMirror.
- Fix a few other types, outdated references, etc.

There is no behavior change.
- Fix typos and other imperfections (eg in `repo:foo/.*`, the `.*` is unnecessary)
- Regroup and re-label the examples
- Make diff search examples more prominent
- Remove the diff search pre-populated author because it is unlikely to return results (given that diff search is slow and non-deterministic with respect to order)
- Make queries clickable as links instead of adding to query input
We regularly run into a problem where a customer with a large repo tries to embed that repo, gets part way, but then flakiness from OpenAI, their network, or whatever else causes the whole embeddings job to fail. This is very frustrating on the side of the customer, and is the source of a lot of delays when getting customers up and running with Cody. We already retry in the HTTP layer, but sometimes this isn't enough, especially since failures seem to be correlated with the content being sent. Instead, this PR allows some requests to fail. We already allow some chunks to fail (which is the most common failure case where openAI just returns a null vector), but this extends that setting to now also include full request failures. To avoid jobs succeeding when there is a configuration error, some persistent connection error, or the like, we allow some chunks to fail, but if more than 10% of the chunks fail to embed, we fail the whole job. This number is arbitrary, but it probably doesn't matter too much as long as it's greater than zero and less than 100.
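The 10% threshold described above can be sketched as a simple ratio check run after all chunks have been attempted. The function and parameter names are illustrative, not the actual embeddings worker code:

```go
package main

import "fmt"

// failedRatioExceeded reports whether more than maxFailedRatio of the
// chunks failed to embed, in which case the whole embeddings job should
// fail rather than silently succeed with a config or connection error.
// The 0.10 threshold mirrors the PR description; names are hypothetical.
func failedRatioExceeded(failed, total int, maxFailedRatio float64) bool {
	if total == 0 {
		return false // nothing attempted, nothing to fail
	}
	return float64(failed)/float64(total) > maxFailedRatio
}

func main() {
	fmt.Println(failedRatioExceeded(5, 100, 0.10))  // 5% failed: job continues
	fmt.Println(failedRatioExceeded(15, 100, 0.10)) // 15% failed: fail the job
}
```

As the description notes, the exact threshold matters less than it being strictly between zero and 100%: zero would make any transient flake fatal, while 100% would let a fully broken configuration "succeed".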
This checks `ExperimentalFeatures.SearchJobs` before kicking off a search job. Test plan: I unset `experimentalFeatures.searchJobs` locally and verified that the GraphQL API returns an error.
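The guard can be sketched as below; the function, error message, and the boolean parameter standing in for reading `ExperimentalFeatures.SearchJobs` from site config are all illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// errSearchJobsDisabled is a hypothetical error returned when the
// experimental feature is not enabled in site configuration.
var errSearchJobsDisabled = errors.New("search jobs is an experimental feature; enable experimentalFeatures.searchJobs in site configuration")

// createSearchJob refuses to start a job unless the experimental flag
// is set. enabled stands in for ExperimentalFeatures.SearchJobs.
func createSearchJob(enabled bool, query string) (string, error) {
	if !enabled {
		return "", errSearchJobsDisabled
	}
	return "job created for: " + query, nil
}

func main() {
	_, err := createSearchJob(false, "context:global foo")
	fmt.Println(err != nil) // flag unset: the mutation errors
}
```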
* Update docker dind image to 24.0.6 to fix vulns
* Remove no-longer-necessary apk update
Co-authored-by: ggilmore <geoffrey@sourcegraph.com>
Co-authored-by: Joe Chen <joe@sourcegraph.com>
Add JetBrains supported IDEs for Cody
Co-authored-by: Joe Chen <joe@sourcegraph.com>
* Switching ctags to tagged versions
* Update header comment

Co-authored-by: Will Dollman <will.dollman@sourcegraph.com>
Not notifying subscribers because the number of notifying subscribers (24) has exceeded the threshold (10).

Codenotify: Notifying subscribers in OWNERS files for diff 1c4e1df...5759265.
This updates the `bext/release` branch to point to the latest `main`. Based on the documentation here, it seems that's all that's needed to do a release of the browser extension, and the rest will happen in CI. This doesn't include the Safari extension, but it looks like this hasn't been done since April, so let's see Firefox and Chrome work first.

Test plan

Will check that the Firefox and Chrome extensions successfully get published.