TST: enable CPython free-threading support #468


Open · andfoy wants to merge 6 commits into master from test_pytest_run_parallel

Conversation

@andfoy (Author) commented Feb 18, 2025

This PR enables free-threaded builds of bottleneck for CPython 3.13 onwards, where free-threading first became available. This new distribution allows Python packages to leverage true parallelism, since the GIL is now optional.

In particular, this PR makes the following changes:

  • Test the package with pytest-run-parallel, which runs each test in the suite concurrently across multiple threads. This analysis did not surface any major concurrency issues; the only tests that had to be marked as thread-unsafe were related to memory-usage checks and concurrent file overwriting (see the sketch after this list).
  • Add CI jobs to run the tests in parallel, as well as to build and release free-threaded wheels.
  • Check the extension modules for data races using ThreadSanitizer (TSAN).
  • Mark each C module as free-threading compatible.
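
For reference, here is a minimal sketch (not code from this PR; it assumes pytest-run-parallel's thread_unsafe marker and --parallel-threads option, plus CPython 3.13's Py_GIL_DISABLED sysconfig variable and sys._is_gil_enabled()) of how individual tests can be opted out of parallel runs and how a free-threaded interpreter can be detected at runtime:

```python
# Sketch only: how tests interact with pytest-run-parallel and a free-threaded build.
import sys
import sysconfig

import pytest


def test_runs_in_parallel():
    # Ordinary tests are executed concurrently by pytest-run-parallel,
    # e.g. when invoked as `pytest --parallel-threads=4`.
    assert sum(range(10)) == 45


@pytest.mark.thread_unsafe  # marker provided by pytest-run-parallel
def test_concurrent_file_overwrite(tmp_path):
    # Tests that measure process-wide state (memory usage) or overwrite a
    # shared file are marked thread-unsafe so they run single-threaded.
    (tmp_path / "shared.txt").write_text("not safe to overwrite concurrently")


def test_gil_stays_disabled():
    # On a free-threading build (python3.13t), Py_GIL_DISABLED is 1 and, as
    # long as every imported extension declares free-threading support, the
    # GIL remains disabled after importing bottleneck.
    if not sysconfig.get_config_var("Py_GIL_DISABLED"):
        pytest.skip("requires a free-threaded CPython build")
    import bottleneck  # noqa: F401

    assert not sys._is_gil_enabled()
```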

@andfoy changed the title from "TST: test bottleneck against pytest-run-parallel" to "TST: enable CPython free-threading support" on Mar 19, 2025
@rgommers (Collaborator) left a comment

Thanks for working on this @andfoy. Mostly looks good, a few comments.

@rdbisme (Collaborator) commented Mar 29, 2025

I have approved the workflows, so now the CI will run here!

@andfoy force-pushed the test_pytest_run_parallel branch from c87fc40 to c3e5783 on March 31, 2025
@rgommers (Collaborator) left a comment

Thanks @andfoy, this LGTM now. The changes are very minimal, and I verified locally too that everything works as advertised, both with and without pytest-run-parallel installed.

@andfoy (Author) commented Apr 3, 2025

I confirm the wheels are building on my fork: https://github.com/andfoy/bottleneck/actions/runs/14230390834

@andfoy (Author) commented Apr 3, 2025

I need to push a final commit, and then it should be ready to go.

@rdbisme (Collaborator) commented Apr 3, 2025

Why are we skipping musllinux x86 builds?

@rgommers (Collaborator) commented Apr 4, 2025

Why are we skipping musllinux x86 builds?

I bet it's because numpy doesn't provide cp313t_*_i686 wheels (see https://pypi.org/project/numpy/2.2.4/#files), so (a) they won't be useful, and (b) building numpy for cp313t in CI here can't be done fully automatically within the cibuildwheel invocation until there's a Cython 3.1.0 release.

@andfoy (Author) commented Apr 4, 2025

I bet it's because numpy doesn't provide cp313t_*_i686 wheels

This is the actual issue at hand. Since the job was only failing for free-threaded builds on x86, I decided to skip them for now.

@rgommers (Collaborator) commented Apr 6, 2025

@andfoy the failure caught a problem. It looks like a cross-merge conflict because filterwarnings = ["error"] was added in gh-473. I think the underlying reason is that the warnings.catch_warnings usage is wrapped inside the unit_maker utility in bottleneck/tests/reduce_test.py, and hence pytest-run-parallel doesn't automatically run that test single-threaded. Could you verify, and also check that the same issue isn't present in other test files?
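
For context, the warnings filter stack is process-global rather than per-thread, so a catch_warnings/simplefilter("ignore") block in one thread can be clobbered by, or leak into, other threads; with filterwarnings = ["error"] in the pytest config, a clobbered "ignore" filter likely lets the warning escalate into the error seen below. A minimal standalone sketch (not code from this PR) of the race:

```python
# Sketch: why warnings.catch_warnings usage makes a test thread-unsafe.
import threading
import warnings


def helper_like_unit_maker():
    # Mirrors the pattern in reduce_test.py: the catch_warnings block lives
    # inside a helper, so scanning the test function body alone does not see it.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        warnings.warn("suppressed inside the helper", RuntimeWarning)


def another_test_body(results):
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        warnings.warn("should be visible", RuntimeWarning)
        results.append(len(caught))


results = []
threads = [
    threading.Thread(target=helper_like_unit_maker),
    threading.Thread(target=another_test_body, args=(results,)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Depending on thread interleaving, the second thread can record 0 warnings,
# because the first thread's "ignore" filter (and the filter state it restores
# on exiting catch_warnings) is process-global, not per-thread.
print(results)
```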

Keeping the traceback for posterity:

_____________________________ test_reduce[nanmean] _____________________________

func = <built-in function nanmean>

    @pytest.mark.parametrize("func", bn.get_functions("reduce"), ids=lambda x: x.__name__)
    def test_reduce(func):
        """test reduce functions"""
>       return unit_maker(func)

func       = <built-in function nanmean>

/opt/hostedtoolcache/Python/3.13.2/x64-freethreaded/lib/python3.13t/site-packages/bottleneck/tests/reduce_test.py:17: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

func = <built-in function nanmean>, decimal = 5, skip_dtype = ('nansum', 'ss')

    def unit_maker(func, decimal=5, skip_dtype=("nansum", "ss")):
        """Test that bn.xxx gives the same output as bn.slow.xxx."""
        fmt = "\nfunc %s | input %s (%s) | shape %s | axis %s | order %s\n"
        fmt += "\nInput array:\n%s\n"
        name = func.__name__
        func0 = eval("bn.slow.%s" % name)
        for i, a in enumerate(arrays(name)):
            if a.ndim == 0:
                axes = [None]  # numpy can't handle e.g. np.nanmean(9, axis=-1)
            else:
                axes = list(range(-1, a.ndim)) + [None]
            for axis in axes:
                actual = "Crashed"
                desired = "Crashed"
                actualraised = False
                try:
                    # do not use a.copy() here because it will C order the array
                    actual = func(a, axis=axis)
                except:  # noqa
                    actualraised = True
                desiredraised = False
                try:
                    with warnings.catch_warnings():
                        warnings.simplefilter("ignore")
                        desired = func0(a, axis=axis)
                except:  # noqa
                    desiredraised = True
                if actualraised and desiredraised:
                    pass
                else:
                    tup = (
                        name,
                        "a" + str(i),
                        str(a.dtype),
                        str(a.shape),
                        str(axis),
                        array_order(a),
                        a,
                    )
                    err_msg = fmt % tup
                    if actualraised != desiredraised:
                        if actualraised:
                            fmt2 = "\nbn.%s raised\nbn.slow.%s ran\n\n%s"
                        else:
                            fmt2 = "\nbn.%s ran\nbn.slow.%s raised\n\n%s"
                        msg = fmt2 % (name, name, traceback.format_exc())
                        err_msg += msg
>                       assert False, err_msg
E                       AssertionError: 
E                         func nanmean | input a80 (int32) | shape (0, 0) | axis 0 | order C,F
E                         
E                         Input array:
E                         []
E                         
E                         bn.nanmean ran
E                         bn.slow.nanmean raised
E                         
E                         NoneType: None
E                         
E                       assert False

a          = array([], shape=(0, 0), dtype=int32)
actual     = array([], dtype=float64)
actualraised = False
axes       = [-1, 0, 1, None]
axis       = 0
da         = dtype('float64')
dd         = dtype('float64')
decimal    = 5
desired    = 'Crashed'
desiredraised = True
err_msg    = '\nfunc nanmean | input a80 (int32) | shape (0, 0) | axis 0 | order C,F\n\nInput array:\n[]\n\nbn.nanmean ran\nbn.slow.nanmean raised\n\nNoneType: None\n'
fmt        = '\nfunc %s | input %s (%s) | shape %s | axis %s | order %s\n\nInput array:\n%s\n'
fmt2       = '\nbn.%s ran\nbn.slow.%s raised\n\n%s'
func       = <built-in function nanmean>
func0      = <function nanmean at 0x2eec1e3c3f0>
i          = 80
msg        = '\nbn.nanmean ran\nbn.slow.nanmean raised\n\nNoneType: None\n'
name       = 'nanmean'
skip_dtype = ('nansum', 'ss')
tup        = ('nanmean', 'a80', 'int32', '(0, 0)', '0', 'C,F', ...)

/opt/hostedtoolcache/Python/3.13.2/x64-freethreaded/lib/python3.13t/site-packages/bottleneck/tests/reduce_test.py:67: AssertionError

@andfoy (Author) commented Apr 7, 2025

I confirm that bottleneck/tests/nonreduce_axis_test.py also reuses the unit_maker pattern; however, those tests can be marked as thread-unsafe automatically when using Quansight-Labs/pytest-run-parallel#37.
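
For illustration, a minimal sketch of the explicit fallback (the thread_unsafe marker comes from pytest-run-parallel; the automatic marking referenced above is meant to make this unnecessary):

```python
# Hypothetical excerpt: explicitly marking a test whose thread-unsafe behaviour
# (warnings.catch_warnings) is hidden inside the shared unit_maker helper, so
# pytest-run-parallel runs it single-threaded even without automatic detection.
import bottleneck as bn
import pytest

from bottleneck.tests.reduce_test import unit_maker  # helper shown in the traceback above


@pytest.mark.thread_unsafe  # marker provided by pytest-run-parallel
@pytest.mark.parametrize("func", bn.get_functions("reduce"), ids=lambda x: x.__name__)
def test_reduce(func):
    """Test reduce functions; forced to run single-threaded."""
    return unit_maker(func)
```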

@rgommers (Collaborator) left a comment

@andfoy, now that the new pytest-run-parallel release is out, can you update this?

@andfoy force-pushed the test_pytest_run_parallel branch from ed8e058 to 509050d on April 25, 2025