Categories of Performance Tests
Performance tests can be divided into two main categories:
- Artificial Benchmarks
- Real UMF Use Cases
I intend to begin with Artificial Benchmarks, but I'm open to feedback on this approach.
1. Artificial Benchmarks
Objective: Create controlled benchmarks to evaluate UMF configurations under various workloads.
- UMF Configurations: Set up different memory providers, both with and without memory pools.
- Workloads: Specify parameters such as allocation sizes, the ratio of allocations to frees, etc.
- Parametrization: The parameters described above should be configurable via CLI options (see the sketch below).
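To make the parameter space concrete, here is a minimal, hypothetical sketch of such a workload. The Workload struct, its field names, and the plain malloc/free calls are illustrative stand-ins only; real benchmarks would route allocations through a configured UMF provider/pool (e.g. umfPoolMalloc/umfPoolFree) and fill the parameters from CLI options.

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical description of one artificial workload; in real benchmarks
// these values would come from CLI options (or, later, a config file).
struct Workload {
    size_t alloc_size;   // size of each allocation, in bytes
    size_t ops;          // total number of alloc/free operations
    double alloc_ratio;  // fraction of operations that are allocations
};

// Runs the workload. malloc/free are stand-ins for allocations going
// through a configured UMF pool (e.g. umfPoolMalloc/umfPoolFree).
void run_workload(const Workload& w) {
    std::vector<void*> live;
    live.reserve(w.ops);
    for (size_t i = 0; i < w.ops; ++i) {
        bool do_alloc = live.empty() ||
                        (static_cast<double>(std::rand()) / RAND_MAX) < w.alloc_ratio;
        if (do_alloc) {
            live.push_back(std::malloc(w.alloc_size));
        } else {
            std::free(live.back());
            live.pop_back();
        }
    }
    for (void* p : live) std::free(p);  // release whatever is still live
}

int main() {
    // Example: 64-byte allocations, 1M operations, 60% of them allocations.
    run_workload({64, 1'000'000, 0.6});
    return 0;
}
```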
Current Status:
- We currently have only a very limited set of benchmarks, built on the ubench framework, which itself has limited functionality.
Proposal:
- Migrate to Google Benchmark:
- Offers more features and is "an industry standard".
- Similar to GTest, which is already in use.
- Many features that we would otherwise have to implement ourselves while sticking to ubench are included out of the box.
- Implement a varied set of benchmarks that exercises the main functionality of UMF in every supported configuration (a rough sketch follows after this list).
- Stretch Goal:
- Define benchmarks through configuration files to allow easier benchmarking of multiple cases without code changes.
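As a rough illustration of the direction (not final benchmark code), a Google Benchmark version of a simple alloc/free workload could look like the sketch below. The allocation size is supplied through the benchmark arguments, and plain malloc/free again stand in for calls into a configured UMF pool.

```cpp
#include <benchmark/benchmark.h>
#include <cstddef>
#include <cstdlib>

// Sketch: a malloc/free round trip for a given size. The size comes from the
// benchmark arguments, so one function covers many workload configurations.
static void BM_AllocFree(benchmark::State& state) {
    const size_t size = static_cast<size_t>(state.range(0));
    for (auto _ : state) {
        void* p = std::malloc(size);   // stand-in for umfPoolMalloc(pool, size)
        benchmark::DoNotOptimize(p);
        std::free(p);                  // stand-in for umfPoolFree(pool, p)
    }
    state.SetItemsProcessed(state.iterations());
}
// Cover allocation sizes from 64 B up to 1 MiB.
BENCHMARK(BM_AllocFree)->RangeMultiplier(8)->Range(64, 1 << 20);

BENCHMARK_MAIN();
```

Driving the same function with different Args/Range values is also what would make the configuration-file stretch goal straightforward: a config file would only need to generate argument lists rather than new code.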
2. Real Use Benchmarks
Objective: Benchmark UMF in real-world applications to assess performance in practical scenarios.
- Approach:
- Use applications that utilize UMF directly or through a proxy library.
- Preferably select applications with existing benchmark suites.
Current Need:
- A list of potential applications is yet to be compiled.
- Request: Suggestions for suitable applications are welcome.
Performance Testing Framework
We plan to employ GitHub Action Benchmark to automate performance testing.
Features:
- Parses test results and generates performance reports.
- Stores archival reports (on GitHub Pages).
- Generates charts displaying performance metrics over time (commits on the X-axis, metrics on the Y-axis), which are published on GitHub Pages.
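For context on how our benchmarks would feed this tooling: Google Benchmark can write its results as JSON (via --benchmark_out / --benchmark_out_format=json), which is the format I understand github-action-benchmark parses for Google Benchmark-based suites. Below is a small, hypothetical sketch of exposing an additional named metric so it would show up in those reports and charts.

```cpp
#include <benchmark/benchmark.h>
#include <cstddef>
#include <cstdlib>

// Sketch: report an extra named metric so it appears in the JSON output
// (e.g. run with --benchmark_out=results.json --benchmark_out_format=json)
// and therefore in the generated reports and charts.
static void BM_AllocThroughput(benchmark::State& state) {
    const size_t size = static_cast<size_t>(state.range(0));
    for (auto _ : state) {
        void* p = std::malloc(size);  // stand-in for a UMF pool allocation
        benchmark::DoNotOptimize(p);
        std::free(p);
    }
    // Exposed as a rate (per second), next to the default wall/CPU times.
    state.counters["allocs_per_second"] = benchmark::Counter(
        static_cast<double>(state.iterations()), benchmark::Counter::kIsRate);
}
BENCHMARK(BM_AllocThroughput)->Arg(4096);

BENCHMARK_MAIN();
```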
Testing Strategy
- Pull Requests (PRs):
- Run a selected set of benchmarks for each PR.
- Compare performance against the main branch.
- Optionally fail the workflow if performance degrades beyond a set threshold.
- Main Branch Commits:
- Run a broader selection of benchmarks on each push to the main branch.
- Update the performance archive to serve as a reference for future PRs.
- Update the gh-pages site with performance charts for the new commit.
Next Steps
To implement this performance testing plan, I will begin by migrating the existing benchmarks from ubench to Google Benchmark and integrating GitHub Action Benchmark into our GitHub Actions CI/CD. Once this is complete, we will start extending the list of artificial benchmarks while also identifying real-use ones.
Alongside this performance testing task, we are planning to introduce CTL. CTL is an interface for examining and modifying internal state - it will be useful for reading internal statistics from providers/pools, which can serve as additional performance counters. More details about CTL will be provided in a separate issue.
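Since the CTL interface itself is not defined in this issue, the following is a purely hypothetical, self-contained mock meant only to illustrate the idea of addressing internal statistics by string paths and sampling them around a benchmark run; the ctl_get name, the statistic paths, and the backing map are all invented for illustration and are not the real UMF API.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <optional>
#include <string>

// Invented mock of a CTL-style query interface: internal statistics are
// addressed by string paths and read out as plain values.
static std::map<std::string, uint64_t> g_stats = {
    {"pool.disjoint.stats.alloc_count", 12345},
    {"provider.os.stats.reserved_bytes", 1u << 20},
};

std::optional<uint64_t> ctl_get(const std::string& path) {
    auto it = g_stats.find(path);
    if (it == g_stats.end()) return std::nullopt;
    return it->second;
}

int main() {
    // A benchmark could sample such counters before and after a workload
    // and report the delta as an extra performance metric.
    if (auto count = ctl_get("pool.disjoint.stats.alloc_count"))
        std::cout << "alloc_count = " << *count << "\n";
    return 0;
}
```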
Feedback Requested
We welcome any input on the following:
- Suggestions for real-world applications to include in our benchmarks.
- Ideas to enhance the benchmarking and performance testing process.
- Feedback on the proposed migration to Google Benchmark.