
refactor: split min latency flags #1351

Merged
merged 8 commits into from
Oct 11, 2023

Conversation

abeatrix
Contributor

@abeatrix abeatrix commented Oct 9, 2023

RE: https://sourcegraph.slack.com/archives/C05AGQYD528/p1696845420506809

refactor: replace cody autocomplete minimum latency flag with user, provider, and language based latency flags

Replaced the CodyAutocompleteMinimumLatency feature flag with two new flags:

  • CodyAutocompleteUserLatency: Enables user-based minimum latency for Cody autocomplete. It works the same as the old CodyAutocompleteMinimumLatency flag, but is renamed so the old flag will not be picked up by older versions.
  • CodyAutocompleteLanguageLatency: Enables language-specific minimum latency for Cody autocomplete, applied only to low-performance languages.
  • CodyAutocompleteProviderLatency: Enables provider-specific minimum latency for non-Anthropic providers.
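The renamed flags might be declared along these lines. This is a sketch only: the enum name, the provider-latency string value, and the `resolveLatencyFlags` helper are assumptions inferred from the flag names in this PR, not the actual Cody source.

```typescript
// Hypothetical flag declarations; string values for the user- and
// language-latency flags follow the names used in the review discussion.
enum FeatureFlag {
    CodyAutocompleteUserLatency = 'cody-autocomplete-user-latency',
    CodyAutocompleteLanguageLatency = 'cody-autocomplete-lang-latency',
    CodyAutocompleteProviderLatency = 'cody-autocomplete-provider-latency',
}

// Resolved flag values, so downstream code works with plain booleans
// instead of querying the feature-flag service directly.
interface LatencyFeatureFlags {
    user: boolean
    language: boolean
    provider: boolean
}

// Hypothetical helper: `enabled` stands in for whatever set of active
// flags the extension's feature-flag provider returns.
function resolveLatencyFlags(enabled: Set<string>): LatencyFeatureFlags {
    return {
        user: enabled.has(FeatureFlag.CodyAutocompleteUserLatency),
        language: enabled.has(FeatureFlag.CodyAutocompleteLanguageLatency),
        provider: enabled.has(FeatureFlag.CodyAutocompleteProviderLatency),
    }
}
```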

Expected behavior:

  • Latency for comments is applied if any of the feature flags is enabled.
  • User-based latency is applied only if CodyAutocompleteUserLatency is enabled.
  • Language-based latency is applied only if CodyAutocompleteLanguageLatency is enabled.
  • Provider-based latency is applied only if CodyAutocompleteProviderLatency is enabled.

The getLatency function now accepts feature flags and calculates latency accordingly.

This allows more granular control over autocomplete latency based on user, provider, and language.
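As a rough illustration of that behavior, the flag gating in getLatency could look something like the sketch below. The signature, option names, and millisecond values are all hypothetical; only the gating rules (comment latency behind any flag, each other latency source behind its own flag) follow the PR description.

```typescript
// Hypothetical flag booleans passed into getLatency.
interface LatencyFlags {
    user: boolean      // CodyAutocompleteUserLatency
    language: boolean  // CodyAutocompleteLanguageLatency
    provider: boolean  // CodyAutocompleteProviderLatency
}

// Illustrative latency values in milliseconds (not from the real code).
const COMMENT_MS = 400
const USER_MS = 200
const LANGUAGE_MS = 1000
const PROVIDER_MS = 400

function getLatency(
    flags: LatencyFlags,
    opts: {
        isComment: boolean
        isLowPerfLanguage: boolean
        provider: string
        userRejections: number // hypothetical stand-in for user-based signals
    }
): number {
    let latency = 0
    const anyEnabled = flags.user || flags.language || flags.provider
    // Comment latency applies when any of the three flags is enabled.
    if (anyEnabled && opts.isComment) {
        latency += COMMENT_MS
    }
    // Each remaining latency source is gated only by its own flag,
    // so the flags work independently of one another.
    if (flags.user && opts.userRejections > 0) {
        latency += USER_MS * opts.userRejections
    }
    if (flags.language && opts.isLowPerfLanguage) {
        latency += LANGUAGE_MS
    }
    if (flags.provider && opts.provider !== 'anthropic') {
        latency += PROVIDER_MS
    }
    return latency
}
```

The key design point is that no branch checks more than one flag, which addresses the review question below about avoiding dependencies between flags.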

Test plan

Unit tests are updated.

@abeatrix abeatrix marked this pull request as ready for review October 9, 2023 21:55
Contributor

@dominiccooney dominiccooney left a comment


Some comments and questions inline.

Review threads on vscode/src/completions/latency.ts and vscode/src/completions/latency.test.ts (outdated, resolved).
Contributor

@philipp-spiess philipp-spiess left a comment


Would it make more sense to have two feature flags that turn on their respective features individually instead of having this dependency where cody-autocomplete-lang-latency requires cody-autocomplete-user-latency to be on in order to work?

Review threads on vscode/src/completions/inline-completion-item-provider.ts and vscode/src/completions/latency.test.ts (outdated, resolved).
abeatrix and others added 2 commits October 10, 2023 07:12
Co-authored-by: Philipp Spiess <hello@philippspiess.com>
Co-authored-by: Dominic Cooney <dominic.cooney@sourcegraph.com>
@abeatrix
Contributor Author

@dominiccooney @philipp-spiess thanks for taking the time to review the PR. I've made the changes based on your valuable input, added a new feature flag for provider-based latency, and made sure they all work independently.

Contributor

@dominiccooney dominiccooney left a comment


Nice work. Some suggestions inline.

Review threads on vscode/src/completions/inline-completion-item-provider.ts (outdated, resolved) and vscode/src/completions/latency.test.ts (resolved).
@abeatrix abeatrix merged commit 5680c24 into main Oct 11, 2023
13 checks passed
@abeatrix abeatrix deleted the bee/latency-split branch October 11, 2023 15:04
@abeatrix abeatrix self-assigned this Oct 11, 2023
abeatrix added a commit that referenced this pull request Oct 16, 2023
RE: #1351

fix: apply min latency only when last suggestion was read

The minimum latency for code suggestions is now only applied when the
last suggestion was read by the user. This avoids applying the latency
when the user is actively typing and has not read the previous
suggestions.

The lastCompletionRequestTimestamp property tracks when the last
suggestion request was made. The latency is skipped if the time since
the last request is less than 750ms.
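A minimal sketch of that check, assuming a small wrapper class around the timestamp; apart from lastCompletionRequestTimestamp and the 750ms threshold, the names here are hypothetical.

```typescript
// Illustrative threshold from the commit description: requests arriving
// less than 750ms apart mean the user is still typing.
const STALE_THRESHOLD_MS = 750

class LatencyGate {
    private lastCompletionRequestTimestamp = 0

    // Returns true if the minimum latency should be applied to this request.
    // `now` is injectable for testing; it defaults to the current time.
    public shouldApplyLatency(now: number = Date.now()): boolean {
        const elapsed = now - this.lastCompletionRequestTimestamp
        this.lastCompletionRequestTimestamp = now
        // If less than 750ms have passed since the last request, the user
        // has not read the previous suggestion, so skip the latency.
        return elapsed >= STALE_THRESHOLD_MS
    }
}
```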


## Test plan


Check the output channel. You should no longer see the latency increase
after 5 keystrokes, which is the current behavior.