[cppyy] Review xfail'ed tests #20906
Merged
Conversation
Test Results: 22 files, 22 suites, 3d 11h 21m 54s ⏱️. Results for commit ae95b68. ♻️ This comment has been updated with latest results.
vepadulano reviewed Jan 16, 2026
All xfail'ed cppyy tests have been reviewed, and the ones that are
actually failing are now marked as `xfail(strict=True)`, so the test
fails if it unexpectedly passes.
This gives us a very useful baseline for what currently works in ROOT and what
doesn't, and we'll see when developments such as the cppyy upgrade to the
CppInterOp-based version fix some of these tests.
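To make the pattern concrete, here is a minimal sketch of a strict xfail (the test name, body, and reason are invented for illustration, not taken from the actual suite):
```python
from pytest import mark


# Illustrative only (not from the cppyy test suite): with strict=True,
# pytest reports this test as XFAIL while it keeps failing, but as FAILED
# ("unexpectedly passing") if the assertion ever starts to hold.
@mark.xfail(strict=True, reason="illustrative: known defect")
def test_known_defect():
    assert 1 + 1 == 3  # stands in for the real broken behaviour
```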
Some remaining tests can't be run at all because they crash, so the `strict`
mode doesn't apply to them. These tests are few, however, and each of them now
carries a reason explaining why it crashes:
```txt
git grep xfail | grep -v "strict"
```
```txt
test_concurrent.py: @mark.xfail(run=False, reason="Crashes because TClingCallFunc generates wrong code")
test_concurrent.py: @mark.xfail(run=False, reason="Crashes because the interpreter emits too many warnings")
test_concurrent.py: @mark.xfail(run=False, reason="segmentation violation")
test_cpp11features.py: @mark.xfail(run=False, reason = "Crashes")
test_datatypes.py: @mark.xfail(run=False, reason="segmentation violation")
test_datatypes.py: @mark.xfail(run=False, reason="error code: Subprocess aborted")
test_doc_features.py: @mark.xfail(run=False, condition=WINDOWS_BITS == 64, reason = "Crashes on Windows 64 bit")
test_fragile.py: @mark.xfail(run=False, condition=has_asserts(),
test_fragile.py: @mark.xfail(run=False, reason="Fatal Python error: Aborted")
test_fragile.py: @mark.xfail(run=False, reason="Fatal Python error: Aborted")
test_fragile.py: @mark.xfail(run=False, condition=is_modules_off(), reason="Crashes on build with modules off: Fatal Python error: Segmentation fault")
test_lowlevel.py: @mark.xfail(run=False, condition=IS_WINDOWS, reason="Windows fatal exception: access violation")
test_regression.py: @mark.xfail(run=WINDOWS_BITS != 64, condition=IS_MAC_ARM | WINDOWS_BITS == 64, reason = "Crashes on Windows 64 bit and fails macOS ARM with" \
test_stltypes.py: @mark.xfail(run=False, reason="Fatal Python error: Segmentation fault")
test_stltypes.py: @mark.xfail()
test_stltypes.py: @mark.xfail(run=False, condition=WINDOWS_BITS == 64, reason="Crashes on Windows 64 bit")
test_templates.py: @mark.xfail(run=False, reason="error code: Subprocess aborted")
```
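For illustration, a crashing test is annotated roughly like this (again, the test name, body, and reason are made up, not copied from the suite):
```python
from pytest import mark


# Illustrative only (not from the cppyy test suite): run=False tells pytest
# not to execute the test at all, which is the only safe option when the body
# would crash the whole process; strict has no meaning for a test that never runs.
@mark.xfail(run=False, reason="illustrative: would crash the interpreter")
def test_crashing_feature():
    import ctypes
    ctypes.string_at(0)  # reading address 0 would segfault if this ever ran
```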
A future development could be to ensure, in Cling or cppyy, that these tests
at least fail gracefully instead of crashing; but that is better done after
the cppyy upgrade, since some of these tests might be fixed by it anyway.
Some tests remain skipped for good reasons:
```txt
git grep "mark\.skip"
```
```txt
test_boost.py:@mark.skipif(noboost == True, reason="boost not found")
test_boost.py:@mark.skipif(noboost == True, reason="boost not found")
test_boost.py:@mark.skipif(noboost == True, reason="boost not found")
test_boost.py:@mark.skipif(noboost == True, reason="boost not found")
test_boost.py: @mark.skipif(noboost, reason="boost not found")
test_eigen.py:@mark.skipif(eigen_path is None, reason="Eigen not found")
test_eigen.py:@mark.skipif(eigen_path is None, reason="Eigen not found")
test_fragile.py: @mark.skip(reason="This test is very verbose since it sets gDebug to True")
test_fragile.py: @mark.skip(reason="Not actually a cppyy test")
test_fragile.py: @mark.skipif(not has_cpp_20(), reason="std::span requires C++20")
test_leakcheck.py:@mark.skipif(nopsutil == True, reason="module psutil not installed")
test_leakcheck.py: @mark.skip(reason="disabled due to its sporadic nature, especially fragile on VMs")
test_leakcheck.py: @mark.skip(reason="disabled due to its sporadic nature, especially fragile on VMs")
test_leakcheck.py: @mark.skip(reason="disabled due to its sporadic nature, especially fragile on VMs")
test_leakcheck.py: @mark.skip(reason="disabled due to its sporadic nature, especially fragile on VMs")
test_leakcheck.py: @mark.skip(reason="disabled due to its sporadic nature, especially fragile on VMs")
test_leakcheck.py: @mark.skip(reason="disabled due to its sporadic nature, especially fragile on VMs")
test_leakcheck.py: @mark.skip(reason="disabled due to its sporadic nature, especially fragile on VMs")
test_leakcheck.py: @mark.skip(reason="disabled due to its sporadic nature, especially fragile on VMs")
test_numba.py:@mark.skipif(has_numba == False, reason="numba not found")
test_numba.py: @mark.skip(reason="Numba tests comparing execution times are sensitive and fail sporadically")
test_numba.py: @mark.skip(reason="Numba tests comparing execution times are sensitive and fail sporadically.")
test_numba.py: @mark.skip(reason="Numba tests comparing execution times are sensitive and fail sporadically.")
test_numba.py: @mark.skip(reason="Numba tests comparing execution times are sensitive and fail sporadically.")
test_numba.py:@mark.skipif(has_numba == False, reason="numba not found")
test_eigen.py: @mark.skipif(eigen_path is None, reason="Eigen not found")
test_pythonify.py: @mark.skip(reason="Garbage collection tests are fragile")
test_regression.py: @mark.skip(reason="For ROOT, we don't enable AVX by default ('-mavx' is not passed to Cling)")
test_stltypes.py:@mark.skipif(not has_cpp_20(), reason="std::span requires C++20")
```
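For completeness, this is roughly what the `skip`/`skipif` markers look like (the optional dependency and the reasons below are invented for illustration, not taken from the suite):
```python
from pytest import mark

try:
    import numpy  # stands in for an optional dependency such as boost, eigen or numba
    has_numpy = True
except ImportError:
    has_numpy = False


# Illustrative only: skipif disables a test when its optional dependency is
# missing, while a plain skip disables it unconditionally; both carry a reason.
@mark.skipif(not has_numpy, reason="numpy not found")
def test_needs_optional_dependency():
    assert int(numpy.arange(3).sum()) == 3


@mark.skip(reason="illustrative: too fragile to run reliably in CI")
def test_fragile_behaviour():
    pass
```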
Apologies that these `git grep` commands don't show the names of the tests!
They are only meant to illustrate how many tests are still marked as
`skip`/`xfail` without `strict=True`.
Closes root-project#20085.
vepadulano approved these changes Jan 19, 2026