[APM] Progressive fetching (experimental) #127598
Conversation
💔 Build Failed
random_sampler: {
  probability,
This should probably be seeded by session. Otherwise, on every refresh, the initial data fetched could be different. Seeding allows for consistency between page refreshes.
aye, any idea what that should/can be? can it be something like "apm-app"?
Integer number only. So, an integer hash of a string is ok.
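The thread above suggests deriving the integer seed from a string such as a session id or "apm-app". A minimal sketch of that idea — the helper names here are illustrative, not from the PR:

```typescript
// Illustrative helper (not from the PR): derive a non-negative 32-bit
// integer seed for the random_sampler aggregation from a string such as
// a session id or the "apm-app" constant suggested above.
function stringToSeed(input: string): number {
  let hash = 0;
  for (let i = 0; i < input.length; i++) {
    // 31-based rolling hash, kept in signed 32-bit range via `| 0`.
    hash = (Math.imul(hash, 31) + input.charCodeAt(i)) | 0;
  }
  return Math.abs(hash);
}

// Aggregation body using the seed; `probability` matches the diff above.
function randomSamplerAgg(probability: number, seedSource: string) {
  return {
    random_sampler: {
      probability,
      seed: stringToSeed(seedSource),
    },
  };
}
```

Seeding with the same string on every refresh makes `random_sampler` pick the same document subset, which is what keeps visualizations stable across page loads.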
@dgieselaar just confirming that for anything that is a visualization, the progressive fetching is seeded for a user's session :)
Do you mean that this is handled in Elasticsearch?
@dgieselaar no, it is not. Elasticsearch doesn't know about Kibana user sessions.
If the random_sampler is used for visualizations, it should be seeded. Otherwise a different subset of the data is used on every search call, which would be jarring as the visualization will subtly jump around.
Dang, too bad, because I forgot about it. I'll create a follow-up issue. I don't think this is required for an experimental feature though.
@@ -38,7 +38,7 @@ import { eventMetadataRouteRepository } from '../event_metadata/route';
 import { suggestionsRouteRepository } from '../suggestions/route';
 import { agentKeysRouteRepository } from '../agent_keys/route';

-const getTypedGlobalApmServerRouteRepository = () => {
+function getTypedGlobalApmServerRouteRepository() {
I don't know why you changed this but thank you!! :D
that makes two of us!
This is a really exciting idea and I think that multiple teams will greatly benefit from this. If this POC is successful, we could implement this as a search strategy where the random-sampled response is emitted first. The search service API already supports emitting multiple responses, and it would be up to the consumer to decide whether they want to render the partial result or wait for the final response to render.
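A hedged sketch of the search-strategy idea described in the comment above — all names here are illustrative, not the actual Kibana search service API. The source emits the fast sampled response first and the exact response last, and the consumer decides which to render:

```typescript
// Illustrative only: a search-strategy-like source that emits multiple
// responses, modelled as an async generator instead of the real
// observable-based Kibana search service.
interface ProgressiveResponse<T> {
  isPartial: boolean;
  data: T;
}

async function* progressiveSearch<T>(
  sampledSearch: () => Promise<T>,
  exactSearch: () => Promise<T>
): AsyncGenerator<ProgressiveResponse<T>> {
  // Fast, randomly sampled answer first; consumers may render it immediately.
  yield { isPartial: true, data: await sampledSearch() };
  // Exact answer last; consumers that skipped the partial wait for this one.
  yield { isPartial: false, data: await exactSearch() };
}
```

A consumer iterating this source can render each `data` as it arrives, or ignore every response where `isPartial` is true.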
@@ -10,4 +10,5 @@ export const maxSuggestions = 'observability:maxSuggestions';
 export const enableComparisonByDefault = 'observability:enableComparisonByDefault';
 export const enableInfrastructureView = 'observability:enableInfrastructureView';
 export const defaultApmServiceEnvironment = 'observability:apmDefaultServiceEnvironment';
+export const enableRandomSampling = 'observability:enableRandomSampling';
Might be good to add an apm prefix, as discussed before for other settings.
We pulled this from 8.2 after running out of time. We ran into various things, all related to the random_sampler aggregation being a relatively new feature:
services: {
  terms: {
    field: SERVICE_NAME,
sampled: {
For the sake of consistency, can we use either sample or sampled? 🙏
done! changed sampled to sample.
this is super exciting 🥳 it might not be in the scope of the current PR, but I've noticed that the […]. The downside is that we won't see a performance improvement for rendering the sparklines; it might even be a bit slower because it depends on 2 requests. @dgieselaar do you think it is possible to decouple the dependency?
There's a possibility of the service names changing after the unsampled request comes in. I'd like to avoid triggering […] requests instead of two for […].
There will still be a performance improvement compared to today - we already block on […].
x-pack/plugins/apm/public/components/app/service_inventory/index.tsx (two outdated review threads, resolved)
  progressiveLoadingQuality
);

const sampledFetch = useFetcher(
It would be nice to have some explanation about the difference between sampledFetch and unsampledFetch.
Do you mean about the concept of sampling? IMHO the variable names are pretty descriptive as-is, but yes, they require the reader to understand what sampling is. But explaining that is going to be a long comment here 😄
Or at least something that says when we should use useProgressiveFetcher over useFetcher.
@cauemarcondes when you use "request changes", can you be explicit about the changes you're requesting? IMHO it should be reserved for blockers and I don't really see any.
const unsampledFetch = useFetcher(
  (regularCallApmApi) => {
    return callback(clientWithProbability(regularCallApmApi, 1));
Do you think it would be clearer if you use ProgressiveLoadingQuality here instead of 1?
-return callback(clientWithProbability(regularCallApmApi, 1));
+return callback(clientWithProbability(regularCallApmApi, ProgressiveLoadingQuality.off));
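The suggestion above passes `ProgressiveLoadingQuality.off` where a probability is expected, which implies the enum's members are themselves probabilities and `off` means "no sampling" (probability 1). A sketch of that reading — the `low`/`medium` values here are assumptions, not taken from the PR:

```typescript
// Sketch only: a numeric enum whose members double as random_sampler
// probabilities. Only the invariant that `off` === 1 (no sampling) is
// implied by the review suggestion; the other values are assumed.
enum ProgressiveLoadingQuality {
  low = 0.1,
  medium = 0.5,
  off = 1,
}

function probabilityFor(quality: ProgressiveLoadingQuality): number {
  // Numeric enum members are plain numbers at runtime, so a member can
  // be passed directly wherever a probability is expected.
  return quality;
}
```

This is why the suggested diff works without any conversion: `ProgressiveLoadingQuality.off` evaluates to the number 1 at the call site.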
LGTM, very nice! 👏🏻
Yeah, I shouldn't have selected "request changes", my bad.
💚 Build Succeeded
⚪ Backport skipped: the pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually. For questions, refer to the Backport tool documentation.
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
This implements progressive fetching for the API endpoints used for the service inventory and the trace inventory. Here's how it works:
Closes #126593.
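The step-by-step walkthrough from the PR description did not survive this capture, but the conversation above outlines the pattern: fire a cheap sampled request and an exact unsampled request, render the sampled result first, and replace it when the exact one lands. A rough, self-contained sketch of that flow — the names are illustrative stand-ins, not the PR's actual `useProgressiveFetcher` hook:

```typescript
// Illustrative sketch of the progressive fetching flow (not the real hook):
// run both requests in parallel and let the caller render progressively.
async function progressiveFetch<T>(
  fetchSampled: () => Promise<T>, // fast, random_sampler-backed request
  fetchUnsampled: () => Promise<T>, // exact request (probability 1)
  render: (data: T, isFinal: boolean) => void
): Promise<void> {
  let finalRendered = false;

  const unsampled = fetchUnsampled().then((data) => {
    finalRendered = true;
    render(data, true);
  });

  // Only render the sampled result if the exact one hasn't landed yet,
  // so a slow sampled response can never overwrite the final data.
  const sampled = fetchSampled().then((data) => {
    if (!finalRendered) {
      render(data, false);
    }
  });

  await Promise.all([sampled, unsampled]);
}
```

The guard on `finalRendered` reflects a concern raised in the review: results (e.g. service names) can change when the unsampled response arrives, so the exact data must always win.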