Horizontal Plotting using widgets #3140
Conversation
Another note is that a preference can be set with
Codecov Report — patch coverage:

    @@  Coverage Diff                                  @@
    ##           RELEASE_next_major    #3140     +/-  ##
    ====================================================
    + Coverage             81.25%    81.27%    +0.02%
    ====================================================
      Files                   176       176
      Lines                 24376     24401       +25
      Branches               5675      5683        +8
    ====================================================
    + Hits                  19807     19833       +26
    + Misses                 3266      3261        -5
    - Partials               1303      1307        +4

View full report in Codecov by Sentry.
Another useful example might be something like this, where an interactive sum of some area in the sample is created and plotted beside the signal and the navigator.
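The "interactive sum" described above is essentially a region-of-interest reduction over the navigation axes. A minimal NumPy-only sketch of what such a computation produces (the array shape and region bounds are illustrative, not from the PR):

```python
import numpy as np

# Hypothetical 4D HyperSpy-style dataset: (nav_y, nav_x, sig_y, sig_x)
data = np.arange(4 * 4 * 2 * 2, dtype=float).reshape(4, 4, 2, 2)

# Summing over a rectangular navigator region (rows 0-1, cols 0-1)
# collapses the navigation axes and yields a signal-shaped array
region_sum = data[0:2, 0:2].sum(axis=(0, 1))
print(region_sum.shape)  # (2, 2)
```

In HyperSpy itself this would be wired up through a ROI and `hs.interactive` so the sum recomputes as the widget moves.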
For some reason it also appears that it fails with Python 3.8. Edit: it appears that the newest version of IPython was released earlier today and doesn't support Python 3.8 and lower.
@ericpre Any ideas of how to test something like this?
Add a notebook, run it and check some of the output?
That was what I was thinking, but it does come with some weird challenges, so I was curious what you would suggest. That being said, testing Jupyter notebooks is much, much better than it was a couple of years ago. The ReviewNB/treon apps have some testing which checks notebook output, which might be a good way to start. I would recommend adding the ReviewNB app for visualizing diffs as well. It might also be a good idea to test some other notebook functionality, just to make sure that the widgets and such are working properly.
In principle, functionality depending on …

With checking some of the output, I meant putting a few assertions in the same notebook to check that the output contains a matplotlib canvas, that the navigation and signal plots are in the correct outputs (for example by checking the data in the matplotlib figure), etc. I suspect that https://github.com/computationalmodelling/nbval won't be a good fit, because we aren't interested in the content of the notebook itself; actually it would be better to avoid depending on that, as we don't want the test to fail because of small changes in ipympl or matplotlib, etc.
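A minimal sketch of the "check the data in the matplotlib figure" idea, using the headless Agg backend for illustration (the real test would use ipympl):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs without a display
import matplotlib.pyplot as plt

data = np.random.random((4, 4))
fig, ax = plt.subplots()
im = ax.imshow(data)

# Assert that the data which ended up in the figure is what we plotted
assert np.allclose(np.asarray(im.get_array()), data)
```

The same pattern works inside a notebook cell: plot, then assert on the artists of the resulting figure rather than on the notebook's serialized output.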
@ericpre It's kind of sounding like there is no great solution... What about a test like this?

    import io
    from contextlib import redirect_stdout

    import numpy as np
    import matplotlib
    import hyperspy.api as hs


    def test_horizontal():
        matplotlib.use("module://ipympl.backend_nbagg")
        f = io.StringIO()
        with redirect_stdout(f):
            s = hs.signals.Signal2D(np.random.random((4, 4, 2, 2)))
            s.plot(plot_style="horizontal")
        assert ("HBox(children=(Output(layout=Layout(margin='auto 0px auto 0px')),"
                " Output(layout=Layout(margin='auto 0px auto 0px')" in f.getvalue())


    def test_vertical():
        matplotlib.use("module://ipympl.backend_nbagg")
        f = io.StringIO()
        with redirect_stdout(f):
            s = hs.signals.Signal2D(np.random.random((4, 4, 2, 2)))
            s.plot(plot_style="vertical")
        assert ("VBox(children=(Output(layout=Layout(margin='0px auto 0px auto')),"
                " Output(layout=Layout(margin='0px auto 0px auto')" in f.getvalue())

Kind of a weird pseudo-test. It checks that the code runs without error, but I don't think it actually tests that anything is plotted or that the output is properly captured. It does check that the plot style is handled properly. That way we can just add ipympl as an extra dependency for testing only.
@ericpre Any idea why our Azure pipeline tests are failing?
There is a failure with one of the image comparisons for the plotting tests, which seems to be a regression from this PR; the output can be downloaded at https://dev.azure.com/franciscode-la-pena-manchon/hyperspy/_build/results?buildId=3154&view=artifacts&pathAsName=false&type=publishedArtifacts
@ericpre So I looked at the two images and there doesn't appear to be an obvious difference. These tests should be covered by the GitHub actions as well, correct? And they are passing there. I'm just trying to wrap my head around what the problem could be. The error thrown is:

    def compare_image_to_baseline(self, item, fig, result_dir, summary=None):
        """
        Compare a test image to a baseline image.
        """
        from matplotlib.image import imread
        from matplotlib.testing.compare import compare_images
        if summary is None:
            summary = {}
        compare = get_compare(item)
        tolerance = compare.kwargs.get('tolerance', 2)
        savefig_kwargs = compare.kwargs.get('savefig_kwargs', {})
        baseline_image_ref = self.obtain_baseline_image(item, result_dir)
        test_image = (result_dir / "result.png").absolute()
    >   fig.savefig(str(test_image), **savefig_kwargs)
    E   AttributeError: 'NoneType' object has no attribute 'savefig'

But it seems like the figure is saved to the output file.
Image comparison only runs on the Azure pipeline, and this is why it passes on GitHub CI. The two images have different aspect ratios, which may be why there is no diff?
Yeah, it does appear to be failing due to a mismatch in size. Apparently the tests don't actually run in that case, so no diff is created.
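The size-mismatch behaviour can be reproduced directly with matplotlib's comparison helper. A sketch (figure contents and paths are illustrative): identical images compare clean, while images rendered at different figure sizes make the comparison error out before any diff image is produced.

```python
import tempfile
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
from matplotlib.testing.compare import compare_images

tmp = Path(tempfile.mkdtemp())

fig, ax = plt.subplots(figsize=(4, 3))
ax.plot([0, 1], [0, 1])
fig.savefig(tmp / "baseline.png")
fig.savefig(tmp / "result.png")

# Identical images within tolerance: compare_images returns None
assert compare_images(str(tmp / "baseline.png"),
                      str(tmp / "result.png"), tol=2) is None

# A different figure size produces a different pixel shape, and the
# comparison raises instead of writing a diff image
fig2, ax2 = plt.subplots(figsize=(5, 3))
ax2.plot([0, 1], [0, 1])
fig2.savefig(tmp / "result2.png")
try:
    compare_images(str(tmp / "baseline.png"),
                   str(tmp / "result2.png"), tol=2)
    raised = False
except Exception:
    raised = True
assert raised
```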
@ericpre This is good now, I think, or at least worth a pass through. I can clean up the commit history a little bit, but that requires editing some of @thomasaarholt's work, so I would rather not just for the sake of completeness. I rebased this onto RELEASE_next_major, but it would be nice to have this merged earlier rather than later: there are a couple of presentations coming up in the next couple of months where this functionality would be very helpful for presenting and visualizing data.
Thank you @CSSFrancis for reviving this work. The approach is good but it needs tidying up!
hyperspy/drawing/mpl_he.py (outdated):

    if plot_style is None:
        plot_style = preferences.Plot.widget_plot_style
    # If widgets do not already exist, we will `display` them at the end
    display_nav_widget_now = not navigator_widget
Is there a reason to defer the creation of the widget? If not, it would be better to make the flow more linear and avoid indirection.
I'm not exactly sure what you are asking here. Are you asking about setting the preferences and having the plot style? Or something related to creating the Output widget? Or something else?
@ericpre it appears that when you plot using this method, the button-press events aren't going through. You can still use your cursor to move the selector around, but none of the keystrokes are being captured. Do you have any idea why that might be? My best guess is that the plot isn't actually being selected because it is wrapped in another widget, which results in the button presses not being properly forwarded.
Okay, this was vastly over-complicated, mostly because I was a little too stuck on the idea of using the … This also seems to fix the issue where the keys aren't registering.
Honestly, sorry this took so long to reach a much cleaner solution, but I'm pretty happy with how things ended up. I think we start with this and come back to see if we need something like capturing the output. That wouldn't be too hard; all we would have to do is return
@ericpre is this PR good to merge? I'd like to use it at the sfmu in two weeks :-)
@jlaehne It should be good to go now. I added some documentation/changelog entries that were removed when I simplified things quite a bit.
Just checked it out locally. Works like a charm. The only thing I noticed is that if I am zoomed in too far (to have large font sizes in a presentation), the plots start to overlap and extend off the right side of the screen (without a horizontal scroll bar), and then I can't access the resize handle in the lower right corner of the plot.
I wanted to have another look but I don't have time at the moment, so don't wait for my review. I wanted to revisit the initial approach where it is possible to pass a widget as an argument, but this can be done in another PR.
@jlaehne I can change things so the plots will stack rather than overlap if you make your window smaller. I didn't do that initially because I figured people might get frustrated if they have a small window, try to plot horizontally, and get stacked plots. Another thing to consider is that HyperSpy can set a default dpi different from matplotlib's if we want smaller figures; that might be a good preference to add. @hakonanes has a good method for setting that value globally which works fairly well for presentations.
I think I would recommend merging this now, and then considering changing that in a later PR. Just to get this in :) |
Just something I noticed; I'm not fully decided which of the two is the best solution, but I guess we could still activate stacking in a separate PR, or make it an option?
That could indeed be an alternative to look into to optimize the display for presentations. But in any case, this horizontal plotting is really a big step forward for demos. I'll merge as soon as the last test has rerun.
Window size changes when we set the dots per inch globally, e.g.
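A sketch of setting the figure DPI globally via matplotlib's rcParams (the value 150 is just an example). The on-screen pixel size of a figure is figsize × dpi, which is why changing the DPI globally changes the window size:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this demo
import matplotlib.pyplot as plt

matplotlib.rcParams["figure.dpi"] = 150  # example value, not a HyperSpy default

fig, ax = plt.subplots(figsize=(6, 4))
width_px, height_px = fig.canvas.get_width_height()
print(width_px, height_px)  # 900 600
```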
Overview

This is an extension to #2338. It fixes the bug related to passing arguments to super that aren't in the original function: having those arguments implicitly handled by **kwargs seems to fix the problem. This is built on the work previously done by @thomasaarholt.

To Do:

This also allows more custom widget views, which would be useful for things like plotting multiple signals.

Example:
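A minimal, illustrative sketch of the **kwargs pattern described in the overview (class and argument names are hypothetical, not the actual HyperSpy ones): the subclass consumes its own keyword and forwards everything else implicitly, so the parent never receives an argument it doesn't define.

```python
class BasePlot:
    def plot(self, colorbar=True, **kwargs):
        # The parent only knows about its own arguments
        return {"colorbar": colorbar, **kwargs}


class WidgetPlot(BasePlot):
    def plot(self, plot_style=None, **kwargs):
        # plot_style is consumed here; everything else is forwarded
        # implicitly, so BasePlot.plot never sees a keyword it
        # doesn't define
        self.plot_style = plot_style
        return super().plot(**kwargs)


p = WidgetPlot()
result = p.plot(plot_style="horizontal", colorbar=False)
print(result)  # {'colorbar': False}
```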