[v10.1.x] Loki: Cache extracted labels #75905
Merged
Backport 5b63cdb from #75842
What is this feature?
Adds a simple Map to cache the previous results of Loki data sample queries in the Loki language provider.
Why do we need this feature?
loki-data-samples queries are currently triggered directly from the Monaco completion callbacks rather than from the React/UI application layer, so certain keystrokes in certain contexts trigger another query. When such a query takes a long time to complete, the editor UX becomes sluggish and difficult to work with. Since the values returned by this query are extracted label values, which are not expected to change from second to second, the solution proposed here is to cache the requests and serve "stale" labels, so the user doesn't have to wait on an API request every time they press the spacebar or a comma inside a stream selector.
TL;DR:
To prevent duplicate API calls when fetching labels for autocomplete, we cache up to 2 unique query strings and their associated labels.
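A minimal sketch of the idea (names are illustrative, not the actual Grafana code): a plain `Map` keyed by query string, capped at 2 entries. Because a `Map` iterates in insertion order, evicting the first key drops the oldest entry.

```typescript
// Hypothetical sketch of the bounded cache described above.
const MAX_CACHE_SIZE = 2;

class ExtractedLabelsCache {
  private cache = new Map<string, string[]>();

  get(query: string): string[] | undefined {
    return this.cache.get(query);
  }

  set(query: string, labels: string[]): void {
    if (!this.cache.has(query) && this.cache.size >= MAX_CACHE_SIZE) {
      // Evict the first-inserted entry (Map preserves insertion order).
      const oldestKey = this.cache.keys().next().value;
      if (oldestKey !== undefined) {
        this.cache.delete(oldestKey);
      }
    }
    this.cache.set(query, labels);
  }
}
```

On a completion callback, the provider would check `get(query)` first and only hit the API on a miss, so repeated keystrokes against the same stream selector are served from memory.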
Who is this feature for?
Users of the Loki code editor.
Which issue(s) does this PR fix?:
Fixes #75512
Special notes for your reviewer:
This is the lightest implementation I could imagine in terms of CPU/memory consumption; I was worried about cases where the list of labels is quite long.
We could use an LRU cache for smarter purging (evicting the least recently used entry instead of the first inserted), but that adds a lot of overhead, and the APIs of Map and an LRU cache are very similar, so swapping one in later would be easy.
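To illustrate why the swap would be easy (this is a sketch, not the PR's code): an LRU built on the same `Map` exposes the same `get`/`set` surface, differing only in that `get` re-inserts the entry so it becomes the most recently used.

```typescript
// Illustrative LRU with a Map-like get/set API, showing the drop-in swap.
class LruCache<V> {
  private cache = new Map<string, V>();

  constructor(private maxSize: number) {}

  get(key: string): V | undefined {
    const value = this.cache.get(key);
    if (value !== undefined) {
      // Re-insert so this key becomes the most recently used.
      this.cache.delete(key);
      this.cache.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    if (this.cache.has(key)) {
      this.cache.delete(key);
    } else if (this.cache.size >= this.maxSize) {
      // Evict the least recently used entry (first in iteration order).
      this.cache.delete(this.cache.keys().next().value!);
    }
    this.cache.set(key, value);
  }
}
```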
Please check that: