new feature flag for situation-based latency #1202
Conversation
Awesome sauce! Left some early comments.
@philipp-spiess updated based on your feedback (thank you!)
Good stuff! Do you mind adding a few unit tests for latency.ts, just to assert that it works the way we want it to? :)
vscode/src/completions/latency.ts (outdated)

}

if (!lastCandidate?.result.logId) {
    user = currentUserLatency ? currentUserLatency * 2 : defaultLatency.user
I think we need to set the new value to currentUserLatency, right? Otherwise this will always be 0.

I think we should also make sure this can't grow unbounded, so maybe we do:

user = currentUserLatency = currentUserLatency ? Math.min(currentUserLatency * 2, defaultLatency.maxLatency) : defaultLatency.user
Co-authored-by: Philipp Spiess <hello@philippspiess.com>
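The suggested fix can be sketched in isolation like this (a minimal sketch; the starting value and `maxLatency` constants are assumptions taken from the diff, not the actual implementation):

```typescript
// Assumed defaults for illustration only.
const defaultLatency = { user: 200, maxLatency: 1400 }

let currentUserLatency = 0

function bumpUserLatency(): number {
    // Write the doubled value back to currentUserLatency so it compounds
    // across repeated rejections, and clamp it so it cannot grow unbounded.
    currentUserLatency = currentUserLatency
        ? Math.min(currentUserLatency * 2, defaultLatency.maxLatency)
        : defaultLatency.user
    return currentUserLatency
}
```

Without the write-back (`user = currentUserLatency ? … : …` alone), `currentUserLatency` stays 0 and every rejection yields the same starting value instead of doubling.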
    expect(getLatency(provider, languageId)).toBe(1400)
})

it('returns increasing latency up to max latency for supported language on non-anthropic provider after rejecting multiple suggestions consistently', () => {
These tests are pure 🔥
Agree, great work @abeatrix 🔥🔥🔥
@philipp-spiess thank you so much for your help, testing is hardddd 😭
@arafatkatze thank you! I'm sorry we haven't had time to review your PRs. Our team is currently busy working on next week's release, so we apologize for being unresponsive lately, but we want you to know that we appreciate your help and feedback over the past few weeks, and we should be able to catch up on your work after the release 🙇‍♀️
@abeatrix No worries at all! I completely understand the crunch you're in with the upcoming release. I'm genuinely impressed by the creativity that goes behind all of this work and I couldn't help myself from taking a look and adding some thoughts. I'll keep exploring the codebase and trying to understand/implement some ideas. Don't feel pressured to review my PRs—I know you folks are busy, and they can totally wait until you have the bandwidth.
Just happy to help in any way I can!!
CLOSE: #1189, #1175, #1176, #1177

feat: new feature flag for situation-based latency

Add a new feature flag for autocomplete that adjusts the minimum latency for completions based on the environment and user actions:

- Remove the separate feature flags for 350ms and 600ms minimum latency
- Add a single feature flag for adjustable minimum latency
- Introduce LatencyManager to adjust latency based on user actions
- Apply the adjusted minimum latency before showing completions

Test plan

Expected behavior:
1. Every user starts at `200ms` of `user latency`.
2. `baseline latency` starts at `400ms` for all non-Anthropic providers and `0ms` for Anthropic.
3. Working in low-performance files adds an additional `1200ms` to the `baseline latency` as `additional latency`.
4. When a suggestion is accepted, `baseline latency` resets to its starting value and `user latency` resets to 0.
5. When a suggestion is not accepted (e.g. typing over the ghost text), the current `user latency` is doubled.
6. Typing over the same ghost text counts as a single rejection and will not increase latency again.

https://github.com/sourcegraph/cody/assets/68532117/9f287dcf-fb60-435b-b37e-f7446fad5cf3

Co-authored-by: Philipp Spiess <hello@philippspiess.com>
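The expected behavior above can be sketched roughly as follows. This is a hypothetical illustration of the rules in the test plan, not the actual LatencyManager: the function names, the `latencyDefaults` constants, and the accept/reject tracking shape are all assumptions.

```typescript
// Hypothetical sketch of the latency rules described in the test plan.
const latencyDefaults = {
    baseline: 400,        // rule 2: starting baseline for non-Anthropic providers
    user: 200,            // rule 1: starting user latency
    lowPerformance: 1200, // rule 3: additional latency for low-performance files
    max: 1400,            // assumed upper bound on total latency
}

let currentUserLatency = latencyDefaults.user // rule 1: every user starts at 200ms
let lastRejectedId: string | undefined

function getLatency(provider: string, isLowPerformanceFile: boolean): number {
    // Rule 2: Anthropic starts at 0ms baseline, everyone else at 400ms.
    let baseline = provider === 'anthropic' ? 0 : latencyDefaults.baseline
    // Rule 3: low-performance files add 1200ms of additional latency.
    if (isLowPerformanceFile) {
        baseline += latencyDefaults.lowPerformance
    }
    return Math.min(baseline + currentUserLatency, latencyDefaults.max)
}

function onReject(suggestionId: string): void {
    // Rule 6: typing over the same ghost text counts as one rejection only.
    if (suggestionId === lastRejectedId) {
        return
    }
    lastRejectedId = suggestionId
    // Rule 5: double the current user latency (bounded, per review feedback).
    currentUserLatency = currentUserLatency
        ? Math.min(currentUserLatency * 2, latencyDefaults.max)
        : latencyDefaults.user
}

function onAccept(): void {
    // Rule 4: accepting resets user latency; baseline is recomputed per call,
    // so it is effectively reset to its starting value as well.
    currentUserLatency = 0
    lastRejectedId = undefined
}
```

The design mirrors the review discussion above: the doubled value is written back so rejections compound, and the clamp keeps total latency from growing unbounded.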