
Add early-termination optimization to attenuated mip #2465

Merged (4 commits into vispy:main, Apr 24, 2023)

Conversation

@aganders3 (Contributor) commented Mar 21, 2023

This idea originated from a thread on image.sc about poor framerates for volume rendering in napari. The idea is to exit the main raycasting loop early when there is no chance of finding a higher value, based on the provided contrast limits. I believe this will not affect the final appearance at all and is purely a performance optimization (my testing supports this).

I don't have a realistic dataset where this makes a truly meaningful difference, but for now you can try this example (adapted from examples/scene/volume.py), which renders a huge cube of random data. Run it with and without this change and the performance difference should be apparent.

import numpy as np

from vispy import app, scene

# Prepare canvas
canvas = scene.SceneCanvas(size=(800, 600), show=True)
canvas.measure_fps()

# Set up a viewbox to display the image with interactive pan/zoom
view = canvas.central_widget.add_view()

# Create the volume visual
volume = scene.visuals.Volume(
    np.random.random((1024, 1024, 1024)).astype(np.float32),
    parent=view.scene,
    attenuation=0.05,
    method="attenuated_mip",
)

view.camera = scene.cameras.TurntableCamera(
    parent=view.scene,
    fov=60,
    name='Turntable',
)


if __name__ == '__main__':
    app.run()
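The early-exit idea can be sketched in plain Python. This is an illustrative model only (the function name and the exact attenuation formula here are mine, not vispy's GLSL shader code): because the accumulated intensity along the ray only grows, the attenuation factor only shrinks, so once even a fully saturated sample could no longer beat the running maximum, the loop can stop.

```python
def attenuated_mip_ray(samples, clim, attenuation=0.05, early_exit=True):
    """March along one ray and return the attenuated maximum.

    Illustrative model of the optimization only; the real change lives
    in vispy's GLSL raycasting shader. The key invariant is the same:
    the accumulated intensity only grows, so every future sample's
    attenuated contribution is bounded from above.
    """
    low, high = clim
    maxval = float("-inf")
    sumval = 0.0  # accumulated scaled intensity along the ray
    for i, raw in enumerate(samples):
        # Scale/clamp the raw value to [0, 1] using the contrast limits.
        scaled = min(max((raw - low) / (high - low), 0.0), 1.0)
        sumval += scaled
        # Attenuate by how much material the ray has already traversed.
        attenuated = scaled / (1.0 + attenuation * sumval)
        if attenuated > maxval:
            maxval = attenuated
        # Early termination: even a fully saturated sample (scaled == 1)
        # attenuated by the current sum cannot beat the running maximum.
        best_possible = 1.0 / (1.0 + attenuation * sumval)
        if early_exit and best_possible <= maxval:
            return maxval, i  # steps actually taken
    return maxval, max(len(samples) - 1, 0)
```

Running with `early_exit=True` and `early_exit=False` yields the same maximum; only the number of steps taken differs, which is where the speedup comes from.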

N.B. @brisvag

@jni commented Mar 22, 2023

@aganders3 what do you think of my comment on the imagesc thread that the contrast limits should actually apply to the attenuated value, rather than the original data? Basically, my idea is that the attenuation should affect which voxel gets rendered from the ray, and then the mapping of that voxel value to a color gets put through the colormap. Am I thinking about this wrong?

@aganders3 (Contributor, Author) commented Mar 22, 2023

> @aganders3 what do you think of my comment on the imagesc thread that the contrast limits should actually apply to the attenuated value, rather than the original data? Basically, my idea is that the attenuation should affect which voxel gets rendered from the ray, and then the mapping of that voxel value to a color gets put through the colormap. Am I thinking about this wrong?

Yes, I want to look into this. I didn't want to hijack the image.sc thread, so I was going to start a Zulip chat about it but just haven't made the time yet.

Have you tried it since the changes in #2417? I feel this mode is much more predictable since then, but there's surely room for improvement. I still think it makes sense to scale/clamp when calculating the attenuation, so that the attenuation is not dependent on the scale of your data (in particular, with negative values it does not make sense otherwise), but I see where you're coming from as well. I'm going to try a few things.

@aganders3 (Contributor, Author) commented Mar 22, 2023

Maybe I'm getting it wrong, but here is a comparison of what I think you mean (using the value instead of the scaled value in the output). This causes some odd artifacts, but you're right that it prevents needing to bump up the lower contrast limit as you increase attenuation. I'll have to think about it a bit to see if there's a way to get the best of both.

[screenshots: scaled (current) vs. unscaled]

@jni commented Mar 23, 2023

@aganders3 Interesting, thanks for the link! Totally missed that PR. I'm convinced that I have no idea what the right thing to do is, carry on! 😂

@brisvag (Collaborator) commented Mar 23, 2023

Love this :)

> I believe this will not affect the final appearance at all and is only a performance optimization (and my testing supports this).

You could do a few buffer captures at different levels of attenuation and compare the results between the two implementations to make sure this is actually true. But in principle I agree that it should be the same.

@djhoese (Member) commented Mar 23, 2023

As far as I can tell the failures are caused by the new ipython/jupyter comm package being released and/or used before it was ready (NotImplementedError). The code has changed quite a bit in the last 24 hours or so: https://github.com/ipython/comm

@aganders3 (Contributor, Author) commented:

> You could do a few buffer captures at different levels of attenuation and compare the result between the two implementations to actually make sure this is true. But in principle I agree that it should be the same.

Thanks for the idea - I added this to my demo script and I'm satisfied it's the same 🙂

> As far as I can tell the failures are caused by the new ipython/jupyter comm package being released and/or used before it was ready (NotImplementedError). The code has changed quite a bit in the last 24 hours or so: https://github.com/ipython/comm

Is there anything I can help with, or should the job just be re-run? It looks like the test used comm=0.1.2; 0.1.3 is now released and has a relevant fix: ipython/comm#13

@brisvag (Collaborator) commented Mar 23, 2023

> Thanks for the idea - I added this to my demo script and I'm satisfied it's the same 🙂

You could even compare the literal pixel values; they're numpy arrays after all :P That's how some of the older tests in vispy are handled, in fact. But I don't think it's necessary, to be clear!
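That pixel-level check can be sketched as follows. In vispy, `canvas.render()` returns the frame as an (H, W, 4) uint8 RGBA numpy array, so two captures can be compared directly; the helper name here is mine, and it works on any array-likes:

```python
import numpy as np

def buffers_identical(frame_a, frame_b):
    """Pixel-for-pixel comparison of two rendered RGBA buffers.

    With vispy you would obtain the frames via canvas.render(), which
    returns an (H, W, 4) uint8 array; any array-likes work here.
    """
    a = np.asarray(frame_a)
    b = np.asarray(frame_b)
    # Shapes must match, and every pixel/channel value must be equal.
    return a.shape == b.shape and bool(np.array_equal(a, b))
```

For a tolerance-based comparison (e.g. if float buffers differ by rounding), `np.allclose` with a small `atol` could replace `np.array_equal`.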

@djhoese (Member) commented Mar 23, 2023

Restarted failing job. Let's see what happens.

@aganders3 (Contributor, Author) commented:

Now that I think about it, the same optimization can be applied to non-attenuated MIP for any fragment that's going to be maxed out (maxval >= clim.y). This is probably less impactful, but may still be worth including. I guess it's already in place here if you set attenuation = 0 to get a standard MIP, so that gives a relatively simple way to roughly compare performance (with the overhead that attenuation is still calculated unnecessarily at each step).
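A hypothetical Python model of that plain-MIP case (not the actual shader code): once the running maximum reaches the upper contrast limit, every later sample along the ray maps to the same saturated color, so the march can stop.

```python
def mip_ray(samples, clim, early_exit=True):
    """Plain maximum-intensity projection along one ray.

    Illustrative model only. Once the running maximum reaches the
    upper contrast limit (clim[1]), the mapped color is already
    saturated and later samples cannot change it.
    """
    low, high = clim
    maxval = float("-inf")
    for i, val in enumerate(samples):
        if val > maxval:
            maxval = val
        if early_exit and maxval >= high:
            return maxval, i  # saturated: stop marching
    return maxval, max(len(samples) - 1, 0)
```

The early-exit maximum may differ from the true maximum in raw value, but after clamping to the contrast limits the mapped color is identical, which is all that matters for the rendered image.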

@jni commented Mar 25, 2023

> This is probably less impactful,

I disagree, because MIP is the default rendering that most people use. I expect that the overhead of attenuation is actually significant and you should make this optimisation! (I thought that was already in place...)

@psobolewskiPhD (Contributor) commented:

Running the example takes considerable time before anything happens (with a 10 GB+ Python process spawned), but the difference in performance with this PR is astounding.
Without it I get at most 4.9 fps; with it, a constant 60 fps.

@aganders3 (Contributor, Author) commented:

> I disagree, because MIP is the default rendering that most people use. I expect that the overhead of attenuation is actually significant and you should make this optimisation! (I thought that was already in place...)

I mean less impactful in that a MIP with reasonable contrast limits will probably not have too many saturated pixels, whereas the attenuated MIP might have many that reach full attenuation. In any case, I added this, plus the same optimization for minip. I also added a flag to skip the fine-step refinement in these cases.

> Running the example takes considerable time before anything happens (with a 10 GB+ Python process spawned), but the difference in performance with this PR is astounding.
> Without it I get at most 4.9 fps; with it, a constant 60 fps.

Yes, sorry for the enormous example :p. The random noise likely shows the effect much more strongly than real data would, but it is a striking difference!

@aganders3 (Contributor, Author) commented:

Oops, 9e61444 was bad, but I think it's fixed in 3cb973a. With test data the buffer is still identical with these changes, but there could still be some difference in the depth, as it now sometimes returns on the first value above/below the threshold along the ray. I don't think this would affect the output image, but I might be missing something.

@jni commented Mar 28, 2023

> there could still be some difference with the depth as it's now sometimes returning on the first value above/below the threshold along the ray.

I think this is actually what users would expect, to be honest?

@brisvag (Collaborator) commented Apr 18, 2023

As far as I can tell this is ready to go in! @djhoese, should we get this in the next release?

@brisvag (Collaborator) left a review comment:

Sorry for the last-minute doubt; I don't think it's warranted to add that refine flag, unless I misunderstand how it works?

(review thread on vispy/visuals/volume.py, resolved)
@brisvag (Collaborator) left a review comment:

Sorry, forgot to update the review state. LGTM!

@brisvag (Collaborator) commented Apr 24, 2023

@djhoese feel free to merge if this is good for you!

@djhoese (Member) commented Apr 24, 2023

LGTM. Thanks @aganders3!

@djhoese merged commit adc14c3 into vispy:main on Apr 24, 2023 (15 checks passed).
@aganders3 deleted the attenuated-mip-optimization branch on Apr 24, 2023.
brisvag added a commit to napari/napari that referenced this pull request Jul 20, 2023
# Description
Bump vispy to 0.13 to get some extra features. Relevant changes are:

- vispy/vispy#2465, which can improve performance dramatically for
`mip`-style projections in volumes by stopping the ray cast early if the
max clim was already reached
- a couple of improvements to touchpad gestures (vispy/vispy#2456,
vispy/vispy#2483)

Reverted defaults of `Markers.scaling` also mean that we need to add a
couple of lines in napari to maintain the same behaviour (see
vispy/vispy#2359 and #5312 if you want to dive deeper).

cc @aganders3 who did much of the vispy work :)

---------

Co-authored-by: Matthias Bussonnier <bussonniermatthias@gmail.com>
Co-authored-by: napari-bot <napari-bot@users.noreply.github.com>
Czaki pushed a commit to napari/napari that referenced this pull request Oct 17, 2023