perf: benchmarking code improvements #630
Conversation
size-limit report 📦
Codecov Report
@@           Coverage Diff           @@
##             main     #630   +/-   ##
=======================================
  Coverage   75.76%   75.76%
=======================================
  Files          45       45
  Lines        1588     1588
  Branches      292      292
=======================================
  Hits         1203     1203
  Misses        356      356
  Partials       29       29

Continue to review the full report at Codecov.
Server Benchmark: result, details, and screenshots (collapsed bot comment).
Server Pull Benchmark: result, details, and screenshots (collapsed bot comment).
benchmark/pull/server.yml.example
---
# log-level: debug
scrape-configs:
  - job-name: testing
    enabled-profiles: [cpu, mem]
    static-configs:
      - application: pull-target
        targets:
          - pull-target:4042
        labels:
          pod: pod-0
I think we could even create a fake discovery mechanism that would produce targets (with unique scrape URLs).
Yeah, that would be useful here, although generating these configs is also pretty easy.
What do you mean by unique scraping URL? Is it important for them to have different ones?
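As a rough illustration of "generating these configs" for many targets, a sketch like the following could emit a scrape config with N static targets, each with a unique scrape URL. The `pull-target-%d:4042` hostname scheme and the program itself are assumptions for illustration, not part of this PR:

```go
// genconfig: hypothetical sketch that prints a pull-mode scrape config
// with N static targets, mirroring benchmark/pull/server.yml.example.
package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {
	n := 10 // default number of simulated targets
	if len(os.Args) > 1 {
		if v, err := strconv.Atoi(os.Args[1]); err == nil {
			n = v
		}
	}
	fmt.Println("---")
	fmt.Println("scrape-configs:")
	fmt.Println("  - job-name: testing")
	fmt.Println("    enabled-profiles: [cpu, mem]")
	fmt.Println("    static-configs:")
	for i := 0; i < n; i++ {
		fmt.Println("      - application: pull-target")
		fmt.Println("        targets:")
		fmt.Printf("          - pull-target-%d:4042\n", i) // unique scrape URL per target
		fmt.Println("        labels:")
		fmt.Printf("          pod: pod-%d\n", i)
	}
}
```

A discovery mechanism as suggested above would remove even this step, since targets could be produced at runtime instead of being written out ahead of time.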
This is a great help when working on performance improvements!
* adds benchmarking code
* wip
* pull benchmarks
* adds more info to readme
* benchmark improvements
* no extra logging
* fix
* improvements
* fix
* changes
* lint fix
* initial version of pr-pull report
* fixes
* fix
* fix
* fix
* fix
* fix
* report improvements
* fix
* fix
* wip: unifying benchmark scripts
* more benchmark improvements
* updates to benchmark
* cleanup
* fixes
* fix
Features:
* Adds an Application Selector to enable users to make more targeted queries by default, similar to OG Pyroscope.
* Users can still write their own complex queries if desired.
* Supports both "pyroscope_app" and "service_name" as indexes.

Caveats:
* Utilizes /querier.v1.QuerierService/Series to create a list of apps, which returns more data than necessary.
* Only returns data that is currently in memory, specifically recently ingested apps. (Related to [ui] 'ProfileID/Applications' only shows data that has been ingested recently #630)
* Parsing an "App" from a non-trivial query (e.g., using !~) does not function correctly. In this PR, it primarily affects the dropdown population, which should not match a query like "cpu{app="myapp.*"}" to an "App" accurately.
* Does not preserve tags when switching between apps. For example, if the current query is "cpu{mytag="foo", pyroscope_app="myapp"}" and the user clicks on "myapp2", even if "myapp2" shares the exact tags, the new query will completely remove "mytag="foo"": "cpu{pyroscope_app="myapp2"}".
* The "pyroscope_app/service_name" tag is present in the app selector.
* There is no filtering mechanism similar to OG Pyroscope.
Improves the existing benchmarking code and adds pull mode benchmarks.

To run the pull benchmarks:

# 100 is the number of clients being simulated
./start 100
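For context, each simulated client in pull mode is just a process exposing Go pprof endpoints that the server scrapes. A minimal standalone target might look like the sketch below; this is an illustration of the idea under that assumption, not the actual code launched by ./start:

```go
// pulltarget: minimal sketch of one simulated pull-mode target.
// Importing net/http/pprof registers /debug/pprof/* handlers on the
// default mux, which is what a pull-mode scraper collects from.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers
)

func main() {
	// Do a little work in the background so scraped cpu/mem profiles
	// contain something other than idle time.
	go func() {
		for {
			_ = make([]byte, 1<<10)
		}
	}()
	// Port matches the pull-target:4042 entry in server.yml.example.
	log.Fatal(http.ListenAndServe(":4042", nil))
}
```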
You should be able to see:
/cc @abeaumont @kolesnikovae, you might both be interested in the pull mode benchmark.