add support for "expected to fail" tests? #35395
Comments
assign core
New categories assigned: core. @Dr15Jones, @smuzaffar, @makortel you have been requested to review this Pull request/Issue and eventually sign? Thanks
A new Issue was created by @fwyzard Andrea Bocci. @Dr15Jones, @perrotta, @dpiparo, @makortel, @smuzaffar, @qliphy can you please review it and eventually sign/assign? Thanks. cms-bot commands are listed here
@fwyzard, I really do not understand what you are asking for. Instead of adding a new category for tests, should not a test itself decide under which conditions it should fail? If it helps, we can add a startup script, e.g.
@smuzaffar let me give a concrete example. Given the current test system (a test can only succeed == good or fail == bad), the tests that require CUDA have been written to fail gracefully (`exit 0`). This means that we have no way of knowing, looking only at the test results, if for a given architecture CUDA is OK and the test ran successfully, or if CUDA is broken and all CUDA-related tests are simply doing `exit 0`. I understand that the reason is that we only consider two possible outcomes for each test: success (good) or failure (bad).
Other projects (e.g. gcc, llvm/clang, etc.) have more possible outcomes for each test, such as PASS, FAIL, XFAIL (expected failure), XPASS (unexpected pass), and UNSUPPORTED.
If we had this kind of support, we could let tests fail if the system doesn't meet their requirements (e.g. CUDA is not configured or a GPU is not present), and mark them as expected failures rather than real ones.
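For reference, some test harnesses distinguish these extra outcomes via reserved exit codes; Automake's test driver, for instance, treats exit code 77 as SKIP and 99 as a hard error. A minimal sketch of a test using that convention (the `run_gpu_test` name and the `/proc/driver/nvidia` probe are hypothetical choices for illustration):

```shell
#!/bin/sh
# Sketch: a test that reports "skipped" instead of silently passing when
# its requirements are not met. Exit code 77 is Automake's SKIP convention;
# checking /proc/driver/nvidia/version is just one possible driver probe.
run_gpu_test() {
  if [ ! -e /proc/driver/nvidia/version ]; then
    echo "SKIP: NVIDIA driver not loaded"
    return 77    # reported by the harness as SKIP, not PASS or FAIL
  fi
  # ... the actual CUDA test would run here ...
  echo "PASS"
  return 0
}

run_gpu_test
echo "exit status: $?"
```

A harness aware of this convention can then count the test as skipped on machines without a GPU, rather than as a (fake) success.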
For sure I am not getting the point here :-) Should the two outcomes not be enough to cover all 4 of these? E.g. the following two should be marked as
and the following two should be marked as
Now if a test dynamically decides at runtime that it should fail or pass (e.g. depending on the environment it is running in, the availability of a GPU, or CUDA driver compatibility), then this should be handled by the test itself. For GPU-related tests, I would still suggest a startup script/test which decides if a test should pass or fail.
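A startup check of the kind suggested above could look like the following sketch; `cuda_available` is a hypothetical name, and probing `nvidia-smi` is just one possible way to detect a usable GPU (CMSSW could use its own utility instead):

```shell
#!/bin/sh
# Sketch: decide up front whether CUDA tests can run at all on this machine.
cuda_available() {
  # succeeds only if nvidia-smi exists and can talk to a GPU/driver
  command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1
}

if cuda_available; then
  echo "CUDA tests enabled"
else
  echo "CUDA tests disabled"
fi
```

The individual tests (or the harness) could then consult this one check instead of each test re-implementing its own graceful-exit logic.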
In a few places in CMSSW we have unit tests that fail gracefully (`exit 0`) under some known conditions. A couple of examples:
Maybe we should consider adding a new category of "expected failures" for the tests?
That is, let tests fail if they need to, but keep a list of tests that are expected to do so, and not consider those as real failures in the IBs/PRs tests.
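Such a list could be consumed by a small reclassification step in the harness. A sketch, in which `expected_failures.txt` and `classify_result` are hypothetical names (not existing cms-bot machinery):

```shell
#!/bin/sh
# Sketch: reclassify results using a list of tests expected to fail.
# classify_result <test-name> <exit-status> prints one of:
#   PASS, FAIL, XFAIL (expected failure), XPASS (unexpected pass)
classify_result() {
  name=$1; status=$2
  expected=no
  # a test is "expected to fail" if its name appears in the list file
  grep -qx "$name" expected_failures.txt 2>/dev/null && expected=yes
  if [ "$status" -eq 0 ]; then
    [ "$expected" = yes ] && echo "XPASS" || echo "PASS"
  else
    [ "$expected" = yes ] && echo "XFAIL" || echo "FAIL"
  fi
}

# demo with a hypothetical list containing one test name
printf 'testCudaDevice\n' > expected_failures.txt
classify_result testCudaDevice 1   # prints XFAIL
classify_result testTracking 0     # prints PASS
```

Only FAIL (and perhaps XPASS) would then count as a real failure in the IB/PR results.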