Add early-termination optimization to attenuated mip #2465
Conversation
@aganders3 what do you think of my comment on the image.sc thread that the contrast limits should actually apply to the attenuated value, rather than the original data? Basically, my idea is that the attenuation should affect which voxel gets rendered from the ray, and then the mapping of that voxel value to a color gets put through the colormap. Am I thinking about this wrong?
Yes, I want to look into this. I didn't want to hijack the image.sc thread, so I was going to start a Zulip chat about it but just haven't made the time yet. Have you tried it since the changes in #2417? I feel this mode is much more predictable since then, but there's surely room for improvement. I still think it makes sense to scale/clamp when calculating the attenuation so that the attenuation is not dependent on the scale of your data (in particular, when you have negative values this does not make sense), but I see where you're coming from as well. I'm going to try a few things.
Maybe I'm getting it wrong, but here is a comparison of what I think you mean (using the raw value instead of the scaled value in the output). This causes some odd artifacts. But you're right that it prevents needing to bump up the lower contrast limit as you increase attenuation. I'll have to think about that a bit to see if there's a way to get the best of both. [screenshots: scaled (current) vs. unscaled]
@aganders3 Interesting, thanks for the link! Totally missed that PR. I'm convinced that I have no idea what the right thing to do is, carry on! 😂
Love this :)
You could do a few buffer captures at different levels of attenuation and compare the result between the two implementations to actually make sure this is true. But in principle I agree that it should be the same.
As far as I can tell the failures are caused by the new ipython/jupyter
Thanks for the idea - I added this to my demo script and I'm satisfied it's the same 🙂
Anything I can help with or should it just be re-run? It looks like the test used
You could even compare the literal pixel values, they're numpy arrays after all :P That's how some of the older tests in vispy are handled in fact. But I don't think it's necessary, to be clear!
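The pixel-level comparison suggested above could look something like this. This is a hypothetical `buffers_match` helper (not from vispy), assuming frames come back as `(H, W, 4)` uint8 numpy arrays, e.g. from vispy's `canvas.render()`:

```python
import numpy as np

def buffers_match(a, b, max_diff=0):
    """Compare two rendered framebuffers pixel-by-pixel.

    `a` and `b` are (H, W, 4) uint8 arrays, e.g. as returned by
    vispy's `canvas.render()`. Returns True when no channel differs
    by more than `max_diff` intensity levels.
    """
    a = np.asarray(a, dtype=np.int16)  # widen to avoid uint8 wraparound
    b = np.asarray(b, dtype=np.int16)
    if a.shape != b.shape:
        return False
    return int(np.abs(a - b).max()) <= max_diff
```

A small tolerance (`max_diff=1`) can be useful to ignore off-by-one rounding differences between GPU drivers while still catching real divergence between the two implementations.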
Restarted failing job. Let's see what happens. |
Now that I think about it - the same optimization can be applied to non-attenuated MIP for any fragment that's going to be maxed out (
I disagree, because MIP is the default rendering that most people use. I expect that the overhead of attenuation is actually significant and you should make this optimisation! (I thought that was already in place...)
Running the example takes considerable time before anything happens (with a 10 GB+ Python process spawned), but the difference in performance with this PR is astounding.
I meant less impactful in that a MIP with reasonable contrast limits will probably not have too many saturated pixels, whereas an attenuated MIP might have many that reach full attenuation. In any case, I added this plus the same for minip. I also added a flag to skip the fine-step refinement in these cases.
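For plain MIP/minip, the early exit amounts to stopping the ray as soon as a sample saturates the contrast limits. A minimal CPU-side sketch (a hypothetical `mip_ray` helper, not vispy's actual GLSL shader code):

```python
import numpy as np

def mip_ray(values, clim=(0.0, 1.0), minip=False):
    """CPU sketch of (min)MIP ray casting with early termination.

    Once a sample saturates the contrast limits (>= hi for MIP,
    <= lo for minimum-intensity projection), no later sample can
    change the rendered pixel, so the ray exits early.
    """
    lo, hi = clim
    best = np.inf if minip else -np.inf
    for v in values:
        best = min(best, v) if minip else max(best, v)
        saturated = (best <= lo) if minip else (best >= hi)
        if saturated:
            break  # pixel already saturated; skip the remaining samples
    # map the winning value to [0, 1] through the contrast limits,
    # as the colormap lookup would
    return float(np.clip((best - lo) / (hi - lo), 0.0, 1.0))
```

Because the result of the projection is clamped to the contrast limits anyway, skipping the rest of the ray after saturation cannot change the rendered pixel, only the depth at which the ray stopped.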
Yes - sorry for the enormous example :p. The random noise likely shows the effect much more than real data, but it is a striking difference!
Oops, 9e61444 was bad, but I think it's fixed in 3cb973a. With test data the buffer is still identical with these changes, but there could still be some difference with the depth, as it's now sometimes returning on the first value above/below the threshold along the ray. I don't think this would affect the output image, but I might be missing something.
I think this is actually what users would expect, to be honest? |
As far as I can tell this is ready to go in! @djhoese, should we get this in the next release? |
Sorry for the last-minute doubt; I don't think it's warranted to add that refine flag, unless I misunderstand how it works?
84a7901 to fc5539a
Sorry, forgot to update the review state, LGTM!
@djhoese feel free to merge if this is good for you!
LGTM. Thanks @aganders3!
# Description

Bump vispy to 0.13 to get some extra features. Relevant changes are:

- vispy/vispy#2465, which can improve performance dramatically for `mip`-style projections in volumes by stopping the ray cast early if the max clim was already reached
- a couple of improvements to touchpad gestures (vispy/vispy#2456, vispy/vispy#2483)

Reverted defaults of `Markers.scaling` also mean that we need to add a couple of lines in napari to maintain the same behaviour (see vispy/vispy#2359 and #5312 if you want to dive deeper).

cc @aganders3 who did much of the vispy work :)

---------

Co-authored-by: Matthias Bussonnier <bussonniermatthias@gmail.com>
Co-authored-by: napari-bot <napari-bot@users.noreply.github.com>
This idea originated from a thread on image.sc about poor frame rates for volume rendering in napari. The idea is to exit the main raycasting loop early if there is no chance of finding a higher value based on the provided contrast limits. I believe this will not affect the final appearance at all and is only a performance optimization (and my testing supports this).
I don't have a realistic dataset where this makes a truly meaningful difference, but for now you can try this example (adapted from `examples/scene/volume.py`) with a huge cube of random data with/without this change, and the performance difference should be apparent. N.B. @brisvag