[Benchmarks] Metadata generator #20294
Conversation
Pull Request Overview
This PR refactors the metadata generation for Compute Benchmarks by introducing an automated metadata generator. The changes centralize metadata creation logic and ensure consistency between benchmark group membership and metadata definitions.
- Introduces a new `ComputeMetadataGenerator` class to automatically generate metadata based on `Benchmark.explicit_group`
- Updates type annotations for optional fields in `BenchmarkMetadata` to use union syntax
- Replaces manual metadata creation with automated generation in the Compute suite
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| devops/scripts/benchmarks/utils/result.py | Updates type annotations for range fields to use explicit union syntax |
| devops/scripts/benchmarks/benches/compute_metadata.py | Adds new metadata generator class with centralized group metadata definitions |
| devops/scripts/benchmarks/benches/compute.py | Refactors to use automated metadata generation instead of manual creation |
I would love to see a test that exercises this code whenever we make a change, so we don't rely on manual checks. But unit tests, if any, we can add in another PR.
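In the spirit of the reviewer's suggestion, one candidate unit test is a consistency check that every `explicit_group` used by a benchmark has a metadata definition. This is a hypothetical sketch; the `Benchmark` stub, the function name, and the dict-of-definitions shape are all assumptions, not part of the PR:

```python
from dataclasses import dataclass


@dataclass
class Benchmark:
    """Minimal stand-in for the real Benchmark class (illustrative only)."""
    name: str
    explicit_group: str


def missing_metadata_groups(benchmarks, group_metadata: dict) -> list[str]:
    """Return explicit_group names that have no metadata definition."""
    used_groups = {b.explicit_group for b in benchmarks}
    return sorted(used_groups - group_metadata.keys())
```

A test would assert that this returns an empty list for the full Compute suite, catching a forgotten metadata entry automatically instead of via manual dashboard checks.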
Force-pushed from 806bbea to 1a46a5e
Force-pushed from 1a46a5e to ff00adc
Thank you, good job.
Co-authored-by: Łukasz Ślusarczyk <lukasz.slusarczyk@intel.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@intel/llvm-gatekeepers, please merge. The failing e2e test is unrelated to this benchmarks framework change. This change should fix the missing-charts issue in the dashboard. The benchmark run CI job passed.
@intel/llvm-gatekeepers, could you please merge this? The changes are OK, CI passes, and this is an important fix that makes the benchmark plots work again.
Generate Compute Benchmarks scenarios' metadata automatically based on `Benchmark.explicit_group`.