
TST: Add failing test #26522

Closed
wants to merge 2 commits into from

Conversation

larsoner
Contributor

This is a bug-report-as-PR -- MNE-Python main just started failing with:

mne/viz/topo.py:496: in _imshow_tfr_unified
    ax.imshow(
...
/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/matplotlib/artist.py:766: in set_clip_box
    if clipbox != self.clipbox:
E   ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

So passing a bbox-like array used to work but now doesn't (i.e., the test added here should fail). cc @oscargus, as this seems to have been introduced by #26326

Not sure if the correct solution here is to cast to BboxBase if not an instance of one, or something... but at least in MNE-Python we can work around it for now by passing tuple(bbox) or so, so no rush from our end. 🤷
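The failure can be reproduced without any plotting at all. A minimal sketch (array values hypothetical) of the comparison that `set_clip_box` now performs:

```python
import numpy as np

# Hypothetical bbox-like array, as MNE-Python was passing via clip_box=...
clipbox = np.array([0.065, 0.628, 0.034, 0.028])
old = None  # Artist.clipbox starts out as None

# `clipbox != old` is an elementwise NumPy comparison returning an array,
# so using the result in an `if` raises the ambiguous-truth-value error.
try:
    if clipbox != old:
        pass
except ValueError as err:
    print(err)  # "The truth value of an array with more than one element ..."
```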

@oscargus oscargus added this to the v3.8.0 milestone Aug 14, 2023
@oscargus oscargus added the Release critical For bugs that make the library unusable (segfaults, incorrect plots, etc) and major regressions. label Aug 14, 2023
@oscargus
Contributor

I guess the easiest fix is just to revert that change for set_clip_box. It's probably not worthwhile to add logic to check it in a more complicated way.

Will add a PR tomorrow.

@oscargus
Contributor

Reverting that change leads to:

    def make_image(self, renderer, magnification=1.0, unsampled=False):
        # docstring inherited
        trans = self.get_transform()
        # image is created in the canvas coordinate.
        x1, x2, y1, y2 = self.get_extent()
        bbox = Bbox(np.array([[x1, y1], [x2, y2]]))
        transformed_bbox = TransformedBbox(bbox, trans)
>       clip = ((self.get_clip_box() or self.axes.bbox) if self.get_clip_on()
                else self.figure.bbox)
E       ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
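This is the same root cause surfacing at a second call site: evaluating a multi-element ndarray in a boolean context, this time via `or` rather than `!=`. A quick sketch (array values hypothetical):

```python
import numpy as np

clip = np.array([0.065, 0.628, 0.034, 0.028])  # hypothetical bbox-like array

# `clip or fallback` needs bool(clip), which NumPy refuses for >1 element,
# mirroring the `(self.get_clip_box() or self.axes.bbox)` line above.
try:
    chosen = clip or "axes.bbox fallback"
except ValueError as err:
    print(err)
```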

The line with the new error goes back to 3.3.

Based on the docstring, only Bbox and None are supported; the Bbox-only restriction goes back about ten years.

When did this work most recently? Is it possible that the added test is not similar enough to your use case?

@larsoner
Contributor Author

When did this work most recently? Is it possible that the added test is not similar enough to your use case?

It "works" on the latest release version, the error only popped up in our pip-pre run that uses scientific-python-nightly-wheels:

https://github.com/mne-tools/mne-python/actions/runs/5858016804/job/15881180313#step:17:5090

I put "works" in scare quotes because if I revert to matplotlib 3.7 locally and check the type, it's np.ndarray and it does not raise an error. But I then forced MNE-Python to draw during image creation with an ax.figure.canvas.draw_idle() and, at the MPL end:

  1. Hack into matplotlib/image.py and add some print statements to __init__:
    print('\ninitializing')
    print(type(self))
    print(kwargs.get('clip_box', None))
    print(self.get_clip_box())
    
  2. Add a @property for clipbox with @clipbox.setter so I can see who sets it:
    @clipbox.setter
    def clipbox(self, clip_box):
        import inspect
        print('\nsetting')
        print(inspect.stack()[1])
        print(clip_box)
        self._clipbox = clip_box
    
  3. Add some print statements to make_image (plus a RuntimeError to make it quit after the conditional in make_image):
    print('making')
    print(self.get_clip_box())
    print(self.axes.bbox)
    
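The setter hack in step 2 generalizes to a handy debugging pattern. A self-contained sketch (class and helper names are illustrative, not matplotlib's actual code):

```python
import inspect

class TracedArtist:
    """Toy class: wrap an attribute in a property whose setter logs callers."""

    def __init__(self):
        self.clipbox = None  # goes through the setter below

    @property
    def clipbox(self):
        return self._clipbox

    @clipbox.setter
    def clipbox(self, clip_box):
        caller = inspect.stack()[1]  # who is assigning to us?
        print(f"clipbox set to {clip_box!r} from {caller.function}:{caller.lineno}")
        self._clipbox = clip_box

def set_clip_path_like(artist):
    # Stand-in for the set_clip_path overwrite observed in the trace below
    artist.clipbox = (0.0, 0.0, 1.0, 1.0)

a = TracedArtist()
set_clip_path_like(a)
```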

It looks like in 3.7 the clip_box we pass never actually gets used, because it gets overwritten by set_clip_path:

Print statements
$ pytest mne/viz/tests/test_topo.py -xsk test_plot_tfr_topo
...
mne/viz/tests/test_topo.py 
setting
FrameInfo(frame=<frame at 0x7f652914aae0, file '/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', line 191, code __init__>, filename='/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', lineno=191, function='__init__', code_context=['        self.clipbox = None\n'], index=0, positions=Positions(lineno=191, end_lineno=191, col_offset=8, end_col_offset=20))
None

setting
FrameInfo(frame=<frame at 0x7f652914aae0, file '/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', line 774, code set_clip_box>, filename='/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', lineno=774, function='set_clip_box', code_context=['        self.clipbox = clipbox\n'], index=0, positions=Positions(lineno=774, end_lineno=774, col_offset=8, end_col_offset=20))
[0.06488496 0.62804889 0.03376805 0.02814004]

initializing
<class 'matplotlib.image.AxesImage'>
[0.06488496 0.62804889 0.03376805 0.02814004]
[0.06488496 0.62804889 0.03376805 0.02814004]

setting
FrameInfo(frame=<frame at 0x7f65297c6a40, file '/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', line 808, code set_clip_path>, filename='/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', lineno=808, function='set_clip_path', code_context=['                self.clipbox = TransformedBbox(Bbox.unit(),\n'], index=0, positions=Positions(lineno=808, end_lineno=808, col_offset=16, end_col_offset=28))
TransformedBbox(
    Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0),
    CompositeGenericTransform(
        CompositeGenericTransform(
            BboxTransformTo(
                Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0)),
            Affine2D().scale(1.0)),
        BboxTransformTo(
            TransformedBbox(
                Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
                BboxTransformTo(
                    TransformedBbox(
                        Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
                        Affine2D().scale(100.0)))))))
making
TransformedBbox(
    Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0),
    CompositeGenericTransform(
        CompositeGenericTransform(
            BboxTransformTo(
                Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0)),
            Affine2D().scale(1.0)),
        BboxTransformTo(
            TransformedBbox(
                Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
                BboxTransformTo(
                    TransformedBbox(
                        Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
                        Affine2D().scale(100.0)))))))
TransformedBbox(
    Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
    BboxTransformTo(
        TransformedBbox(
            Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
            Affine2D().scale(100.0))))
F

...
mne/viz/topo.py:508: in _imshow_tfr_unified
    ax.figure.canvas.draw_idle()
../virtualenvs/base/lib/python3.11/site-packages/matplotlib/backend_bases.py:2082: in draw_idle
    self.draw(*args, **kwargs)
...
../virtualenvs/base/lib/python3.11/site-packages/matplotlib/image.py:641: in draw
    im, l, b, trans = self.make_image(
../virtualenvs/base/lib/python3.11/site-packages/matplotlib/image.py:968: in make_image
    raise RuntimeError
E   RuntimeError

So in other words, ndarray was never actually used / checked. So this makes me think this could be closed...

But a follow-up question arises: passing tuple(my_ndarray_clip_box) also does not error, even on main. It seems like it should, but maybe somewhere in there the tuple gets cast; at least looking at the 3.7 code, it doesn't look like it. Making the same changes on main and passing clip_box=tuple(my_ndarray) I get:

Using tuple on main
mne/viz/tests/test_topo.py 
setting
FrameInfo(frame=<frame at 0x7fc02cf24f60, file '/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', line 191, code __init__>, filename='/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', lineno=191, function='__init__', code_context=['        self.clipbox = None\n'], index=0, positions=Positions(lineno=191, end_lineno=191, col_offset=8, end_col_offset=20))
None

setting
FrameInfo(frame=<frame at 0x7fc02cf24f60, file '/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', line 767, code set_clip_box>, filename='/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', lineno=767, function='set_clip_box', code_context=['            self.clipbox = clipbox\n'], index=0, positions=Positions(lineno=767, end_lineno=767, col_offset=12, end_col_offset=24))
(np.float64(0.06488496453324114), np.float64(0.6280488934398878), np.float64(0.03376805371721399), np.float64(0.028140044764344993))

initializing
<class 'matplotlib.image.AxesImage'>
(np.float64(0.06488496453324114), np.float64(0.6280488934398878), np.float64(0.03376805371721399), np.float64(0.028140044764344993))
(np.float64(0.06488496453324114), np.float64(0.6280488934398878), np.float64(0.03376805371721399), np.float64(0.028140044764344993))

setting
FrameInfo(frame=<frame at 0x7fc02d5d7440, file '/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', line 801, code set_clip_path>, filename='/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', lineno=801, function='set_clip_path', code_context=['                self.clipbox = TransformedBbox(Bbox.unit(),\n'], index=0, positions=Positions(lineno=801, end_lineno=801, col_offset=16, end_col_offset=28))
TransformedBbox(
    Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0),
    CompositeGenericTransform(
        CompositeGenericTransform(
            BboxTransformTo(
                Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0)),
            Affine2D().scale(1.0)),
        BboxTransformTo(
            TransformedBbox(
                Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
                BboxTransformTo(
                    TransformedBbox(
                        Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
                        Affine2D().scale(100.0)))))))
making
TransformedBbox(
    Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0),
    CompositeGenericTransform(
        CompositeGenericTransform(
            BboxTransformTo(
                Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0)),
            Affine2D().scale(1.0)),
        BboxTransformTo(
            TransformedBbox(
                Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
                BboxTransformTo(
                    TransformedBbox(
                        Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
                        Affine2D().scale(100.0)))))))
TransformedBbox(
    Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
    BboxTransformTo(
        TransformedBbox(
            Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
            Affine2D().scale(100.0))))

No idea if this is doing the right thing or not (my head is spinning trying to interpret the flow)... But it seems like you should get a TypeError that the clip_box is not the correct type, at the very least in the ndarray case and maybe also in the tuple case. So maybe that's the right approach here, rather than reverting #26326?
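A hypothetical early type check along those lines (function name and message invented for illustration; this is not matplotlib's actual code):

```python
import numpy as np
from matplotlib.transforms import BboxBase

def check_clip_box(clipbox):
    """Hypothetical validator: accept only BboxBase instances or None."""
    if clipbox is not None and not isinstance(clipbox, BboxBase):
        raise TypeError(
            f"clip_box must be a BboxBase or None, got {type(clipbox).__name__}")
    return clipbox

try:
    check_clip_box(np.array([0.065, 0.628, 0.034, 0.028]))
except TypeError as err:
    print(err)  # clip_box must be a BboxBase or None, got ndarray
```

This would turn both the ndarray and the tuple cases into an immediate, readable error instead of a deferred ValueError or silent misuse.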

@oscargus
Contributor

But it seems like you should get a TypeError that the clip_box is not the correct type

Yes, the type checking is typically quite limited in most set_* methods all over the code base.

Will try to understand the rest of the very nice testing you provided and see if there is something else hiding here. The overwriting from set_clip_path, and the fact that the original clip_box wasn't used, seems at least at first glance like something that requires more investigation. Thanks!

@ksunden
Member

ksunden commented Aug 18, 2023

ndarray errors because NumPy disallows `if ndarray(..., dtype=bool):` as ambiguous (and gives an array, not a bool, for `!=` comparisons).

The tuple has no such protections, and thus the check evaluates to a plain bool, which is not an error condition; but it remains not the correct type to be used as a Bbox.
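Concretely, both checks that blow up on an ndarray pass silently for a tuple (values hypothetical):

```python
t = (0.065, 0.628, 0.034, 0.028)  # hypothetical bbox-like tuple

print(t != None)        # True: a plain bool, so the set_clip_box check is happy
print(t or "fallback")  # the tuple itself: non-empty tuples are truthy in `or`
```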

Now, bboxes do implement __array__, which returns the bounds as defined by the points, [[x1, y1], [x2, y2]], so some code (perhaps most particularly C code) may work in some edge cases. (Even then it is a 2D array, so not quite the same as the tuple form provided here, though C code may ignore the given shape and thus work to some extent.) However, the argument is documented as being expected to be a Bbox, not a generic sequence, and there are quite likely parts of the code that rely on that fact.
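A toy stand-in for the __array__ behavior described above (MiniBbox is illustrative, not matplotlib's class):

```python
import numpy as np

class MiniBbox:
    """Toy class mimicking the __array__ protocol that Bbox implements."""

    def __init__(self, x0, y0, x1, y1):
        self._points = np.array([[x0, y0], [x1, y1]])

    def __array__(self, dtype=None, copy=None):
        # The bounds as the 2x2 points array [[x1, y1], [x2, y2]]
        return np.asarray(self._points, dtype=dtype)

arr = np.asarray(MiniBbox(0.0, 0.0, 6.4, 4.8))
print(arr.shape)  # (2, 2): a 2D array, unlike the flat 4-tuple form
```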

Unfortunately, the relationships of these arguments, which stem from set_* methods on Artist, are rather difficult to document effectively... They are not super common to use, and documenting many of them would make our already long docstrings for methods like imshow even longer and harder to sift for what is relevant to you. But equally, when one is relevant, it would be nice to see the expected type there. (Not to mention that these are added based on the presence of a set_* method, which adds a layer of complexity if we were to add them to the docstring.)

Ultimately I will admit to a level of skepticism about the idea at the center of wanting the clip box in the first place, which from my cursory glance at the code seems to be using one Matplotlib Axes object to serve as multiple logical axes by manually applying scaling factors and clip boxes. That is, in fact, the primary job of Matplotlib Axes and the Transform stack. There have quite possibly been performance improvements to Matplotlib that alleviate (if not eliminate) the problems which caused that code to be written in the first place. (Mind you, I have not run benchmarks, so I have nothing empirical to back up the feeling that it is likely not as big a gain as you would hope or intend, or as it once was.) And further skepticism that it will continue to behave as expected for any interactive usage or for resizing of the canvas (if all you do are static plots, that is certainly not a dealbreaker; I just have a sense that it would likely behave strangely).

If the performance gains are still relevant, what you probably want instead of tuple(pos) is TransformedBbox(Bbox.from_extents(*pos), ax.transAxes). (Please note I have not tested this, and please ensure that the order of the position bounds matches your expectation; it is left, bottom, right, top for from_extents.)
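A sketch of that suggestion (untested against MNE's actual code; the pos values are hypothetical, and note that the set_clip_path overwrite discussed above may still replace the clip box at add_image time):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
from matplotlib.transforms import Bbox, TransformedBbox

fig, ax = plt.subplots()
pos = (0.1, 0.1, 0.5, 0.4)  # hypothetical left, bottom, right, top (axes coords)
im = ax.imshow(
    np.random.rand(4, 4),
    # A proper BboxBase instead of a raw ndarray/tuple:
    clip_box=TransformedBbox(Bbox.from_extents(*pos), ax.transAxes),
)
fig.canvas.draw()  # draws without the ambiguous-truth-value ValueError
```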

@larsoner
Contributor Author

(Mind you, I have not run benchmarks, so I have nothing empirical to back up the feeling that it is likely not as big a gain as you would hope or intend, or as it once was.) And further skepticism that it will continue to behave as expected for any interactive usage or for resizing of the canvas (if all you do are static plots, that is certainly not a dealbreaker; I just have a sense that it would likely behave strangely).

At the time we wrote the core of that code, multiple Axes were indeed way slower, but it was also a long time ago. Certainly we could/should revisit better ways of doing what we want. And I don't think the clip_box ever actually worked... someday, hopefully, we'll do a nicer job :)

@ksunden
Member

ksunden commented Sep 4, 2023

As I think we've determined that this is okay to error (just that the error message could perhaps be better and come sooner), I'm going to go ahead and close this as not a release blocker.

@ksunden ksunden closed this Sep 4, 2023
@ksunden ksunden removed the Release critical For bugs that make the library unusable (segfaults, incorrect plots, etc) and major regressions. label Sep 4, 2023
@oscargus oscargus mentioned this pull request Sep 5, 2023