VITIS-12882 Add limits for benchmarks to specify pass criteria #8441
Conversation
Signed-off-by: AShivangi <shivangiagarwal53@gmail.com>
Seems good to me! Thank you for doing additional code cleanup, too 👍
If every benchmark has a threshold, is it possible to move m_threshold to the TestRunner class and let it set the value in its ctor/initializer instead of each test doing it separately?
I think that needs to come as part of a cleanup, which we can probably do with the run recipe implementation. Currently, a lot of the functions in these tests rely on passing in a ptree as a parameter, and I do see an opportunity for a refactor in the future.
I agree this needs some level of refactoring. But to start, we can still move m_threshold to the TestRunner class and change "find_threshold" to "set_threshold" (from a getter to a setter). This also, in a way, forces any new benchmark to take care of setting its "threshold" as well.
Signed-off-by: AShivangi <shivangiagarwal53@gmail.com>
Makes sense! I've made m_threshold a member variable :)
Signed-off-by: AShivangi <shivangiagarwal53@gmail.com>
Signed-off-by: AShivangi <shivangiagarwal53@gmail.com>
Problem solved by the commit
VITIS-12882 Add limits for benchmarks to specify pass criteria
Spec page can be found here
Bug / issue (if any) fixed, which PR introduced the bug, how it was discovered
New feature
Currently, validate passes all the benchmarks unconditionally. This PR compares each benchmark value to the threshold/expected value for the given device and, based on that comparison, decides whether the test has passed or failed.
How problem was solved, alternative solutions (if any) and why they were rejected
A JSON file (benchmark_devid_rev_id.json) will be added per platform to VTD and used for the comparison.
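One possible shape for such a per-platform file is sketched below; the field names, units, and values are purely illustrative assumptions, not the actual schema from the spec page:

```json
{
  "device_id": "0x1234",
  "revision_id": "0x00",
  "benchmarks": {
    "throughput": { "threshold": 100.0, "units": "GB/s" },
    "latency":    { "threshold": 5.0,   "units": "us" }
  }
}
```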
Risks (if any) associated with the changes in the commit
Low: may print some unexpected messages, but functionality will not be compromised
What has been tested and how, request additional testing if necessary
Tested all possible scenarios on Strix/MCDM. Please refer to the spec page for expected behavior
Documentation impact (if any)
N/A