numpy 2 test workflow failures #4309

Closed
Remi-Gau opened this issue Mar 4, 2024 · 7 comments · Fixed by #4350

Comments

Remi-Gau (Collaborator) commented Mar 4, 2024

See here:
https://github.com/nilearn/nilearn/actions/runs/8137823574/job/22237131770?pr=4306#step:6:5109

These failures seem to be more of a pandas 3.0 issue than a numpy 2 one:

FAILED nilearn/datasets/tests/test_func.py::test_fetch_surf_nki_enhanced
FAILED nilearn/plotting/tests/test_matrix_plotting.py::test_show_event_plot

Errors:

("'DataFrame' object has no attribute '_mgr'")
ValueError: Unable to avoid copy while creating an array from given array.
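
For context, the second error comes from a deliberate NumPy 2 behaviour change: np.array(..., copy=False) used to mean "copy only if needed" and now means "never copy", raising when a copy cannot be avoided (copy=None takes over the old meaning). A minimal sketch, independent of nilearn and pandas:

    # NumPy 2 changed the meaning of copy=False in np.array():
    # it now means "never copy" and raises instead of silently copying.
    import numpy as np

    a = np.arange(4, dtype=np.int64)

    np.array(a, copy=None)  # NumPy 2 spelling of the old "copy only if needed"
    try:
        # casting int64 -> float64 cannot avoid a copy, so NumPy 2 raises here
        np.array(a, dtype=np.float64, copy=False)
    except ValueError as err:
        print(err)  # "Unable to avoid copy while creating an array ..."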

See below for an update

Remi-Gau (Collaborator, Author) commented Mar 6, 2024

Seems that one of the runs of the numpy2 workflow passed recently on main:

https://github.com/nilearn/nilearn/actions/runs/8168896123

Rerunning some of the failed ones to see...

bthirion (Member) commented Mar 8, 2024

It still fails from time to time.

Remi-Gau (Collaborator, Author) commented Mar 8, 2024

It mostly fails though.

Will try to see which combination of versions of our "bleeding-edge" dependencies makes those tests fail.

Also realised that the workflow is set to run on a schedule on main but not on push to main. Not a big problem, but it is odd that all the other tests run when a PR is merged into main while these do not...

emdupre (Member) commented Mar 11, 2024

When I last checked, it seemed like the failures had migrated and were not specifically on the two tests linked in the top comment. As of three days ago, it now looks like this:

test_numpy2: commands[3]> pytest nilearn
ImportError while loading conftest '/home/runner/work/nilearn/nilearn/nilearn/conftest.py'.
nilearn/conftest.py:10: in <module>
    from nilearn import image
nilearn/image/__init__.py:6: in <module>
    from .image import (
nilearn/image/image.py:20: in <module>
    from .. import signal
nilearn/signal.py:13: in <module>
    import pandas as pd
.tox/test_numpy2/lib/python3.12/site-packages/pandas/__init__.py:46: in <module>
    from pandas.core.api import (
.tox/test_numpy2/lib/python3.12/site-packages/pandas/core/api.py:1: in <module>
    from pandas._libs import (
.tox/test_numpy2/lib/python3.12/site-packages/pandas/_libs/__init__.py:18: in <module>
    from pandas._libs.interval import Interval
interval.pyx:1: in init pandas._libs.interval
    ???
E   ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
test_numpy2: exit 4 (1.31 seconds) /home/runner/work/nilearn/nilearn> pytest nilearn pid=2301

The explanations I've found online suggest that this happens because the installed pandas wheel was built against a different version of numpy than the one available at runtime. It looks like this is something pandas is already tracking, and it's just a matter of syncing up the releases.
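
One way to keep the two in sync in the meantime (a sketch, not what the test_numpy2 tox env currently does) would be to install both from the scientific-python nightly wheel index, so they are built against matching numpy headers:

    # Hypothetical: pull nightly numpy and pandas from the same index
    # so their compiled interfaces match.
    pip install --pre --upgrade \
        --extra-index-url https://pypi.anaconda.org/scientific-python-nightly-wheels/simple \
        numpy pandas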

EDIT: And in case we want to track it, here are the current numpy 2.0 timelines.

Remi-Gau (Collaborator, Author) commented

Thanks for tracking this @emdupre !!!

Will update the top message.

Remi-Gau (Collaborator, Author) commented Apr 2, 2024

I would suggest reverting the systematic numpy2 checks on every PR because this is adding noise to our review process (in most cases the failures are unrelated to the PR).

This workflow was run on every PR because not all members of the dev team were notified in case of failures.

I would suggest something I have seen in another repo (can't find it at the moment):

  • rename the "numpy2" workflow to "nightly-build dependencies", because it actually runs the test suite after installing all the dependencies in their 'bleeding edge' versions
  • run the workflow at a regular interval
  • if it fails, open an issue (unless one is already open) and ping the dev team (see the sketch below)
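
A minimal sketch of what that could look like, assuming a daily cron, the scientific-python nightly wheel index, and an illustrative "nightly-failure" label (all names, steps, and schedules below are placeholders, not the existing workflow):

    name: nightly-build dependencies

    on:
      schedule:
        - cron: "0 6 * * *"    # once a day; adjust the interval as needed
      workflow_dispatch:       # keep a manual trigger for debugging

    jobs:
      test-nightly-deps:
        runs-on: ubuntu-latest
        permissions:
          issues: write        # needed to open the failure issue
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: "3.12"
          - name: Install bleeding-edge dependencies
            run: |
              pip install --pre --upgrade \
                --extra-index-url https://pypi.anaconda.org/scientific-python-nightly-wheels/simple \
                numpy pandas scipy
              pip install ".[test]"
          - name: Run tests
            run: pytest nilearn
          - name: Open an issue on failure (unless one is already open)
            if: failure()
            uses: actions/github-script@v7
            with:
              script: |
                const title = "nightly-build dependencies workflow failed";
                const { data: open } = await github.rest.issues.listForRepo({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  state: "open",
                  labels: "nightly-failure",
                });
                if (!open.some((issue) => issue.title === title)) {
                  // ping the dev team in the body with whatever team handle applies
                  await github.rest.issues.create({
                    owner: context.repo.owner,
                    repo: context.repo.repo,
                    title: title,
                    labels: ["nightly-failure"],
                    body: `Failing run: ${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`,
                  });
                }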

bthirion (Member) commented Apr 2, 2024

Sounds good.
