Volume rendering updates for isosurface and attenuated MIP #5215
Conversation
Codecov Report
@@           Coverage Diff            @@
##             main    #5215      +/-   ##
==========================================
- Coverage   89.00%   88.83%   -0.17%
==========================================
  Files         578      579       +1
  Lines       48938    49125     +187
==========================================
+ Hits        43557    43642      +85
- Misses       5381     5483     +102
Nice, thanks for the PR! I have a few comments:
Either way, I would be OK with merging this as-is as well :)
Thanks! Great thoughts. I will iterate on this. Point 1 makes complete sense; I will play with this. I think I just didn't understand the
For point 2, I definitely understand the motivation. I guess the goal would be to surface the actual binarization value the user might want, regardless of data type?
I see... Yeah, I'm not sure what's the best solution here. In theory, if
This should show anything between 0.7 and 0.9. But if you use the
I think the range is more intuitive... which means we need to somehow get the normalized range (which I suppose is already happening to set the slider? And if not, we've got another thing to solve :P)
Yes, I find this quite important; I think all of this is pointing in the direction of changing the isosurface limits for the slider, rather than changing the value programmatically once it's set.
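The slider idea above could be sketched like this: keep the slider in the data's own units and convert only when handing the value to the shader. This is a minimal illustration, not napari's actual API; the helper name and the `data_range` tuple are assumptions.

```python
def normalize_threshold(threshold, data_range):
    """Map an isosurface threshold from raw data units to the
    normalized [0, 1] range a shader typically sees (sketch only)."""
    lo, hi = data_range
    return (threshold - lo) / (hi - lo)

# A threshold of 0.8 on data spanning 0.7..0.9 normalizes to about 0.5,
# so the user-facing slider can stay in data units either way.
print(normalize_threshold(0.8, (0.7, 0.9)))
```

With something like this, changing the slider limits (rather than silently rewriting the user's value) keeps the displayed number meaningful for any dtype.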
Force-pushed from f36b517 to 00121ce
Thanks for the review and comments. I took another crack at this if you have time to look again. Now the
I also think the
vec3 max_loc_tex = vec3(0.0); // Location where the maximum value was encountered
""",
in_loop="""
scaled_sumval += (val - clim.x) / (clim.y - clim.x);
This is the relevant change to the shader; otherwise it's just copied in from vispy. Previously it was just sumval = sumval + val;
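The effect of that one-line change can be sketched in plain Python. This is a toy model of just the accumulation step, not the full attenuated-MIP logic; `clim` stands in for the shader's contrast-limit uniform.

```python
def scaled_accumulate(samples, clim):
    """Toy model of the shader change: normalize each sample to the
    contrast-limit range before summing, so the accumulated value is
    independent of the raw data scale."""
    lo, hi = clim
    scaled_sumval = 0.0
    for val in samples:
        scaled_sumval += (val - lo) / (hi - lo)  # mirrors the GLSL line
    return scaled_sumval

# The same relative signal accumulates to the same sum regardless of
# whether the data lives in 0..100 or 0..1:
print(scaled_accumulate([10, 20, 30], (0, 100)))
print(scaled_accumulate([0.1, 0.2, 0.3], (0.0, 1.0)))
```

With the old `sumval = sumval + val;`, the first call would accumulate 60 and the second 0.6, which is why attenuation behaved so differently across dtypes.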
Great! We should probably upstream this to vispy as well, so we can drop the vendoring here.
Now that I think about it, does this mean we can/should fix the isosurface shader in the same way, instead of doing stuff in python?
Good question, which I also didn't think about until now for some reason. I actually haven't looked at the iso shader, so it's worth checking out.
Yeah, taking a look at the in_loop portion of the isosurface shader, it seems like it's assuming something about the scale of the data with this line:
if (val > u_threshold-0.2) {
...
I'll try some similar changes there - it would simplify the code in napari and unify the iso and attenuated_mip implementations.
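The scale-independent version of that test would follow the same pattern as the attenuated-MIP fix. Here is a Python sketch of the idea only (the function name is hypothetical, and it assumes the shader would also receive the contrast limits as clim):

```python
def iso_hit(val, threshold, clim):
    """Sketch of a scale-independent isosurface test: normalize the
    sample by the contrast limits and compare against a [0, 1]
    threshold, instead of comparing raw values against
    `threshold - 0.2` in data units."""
    lo, hi = clim
    return (val - lo) / (hi - lo) > threshold

# With clim (0, 200), a sample of 150 normalizes to 0.75:
print(iso_hit(150, 0.5, (0, 200)))  # above a 0.5 threshold
print(iso_hit(50, 0.5, (0, 200)))   # below it
```

The hard-coded `-0.2` offset only makes sense for data already in roughly [0, 1], which is exactly the assumption that breaks for unnormalized integer textures.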
Actually, it seems this might not work. Only the contrast limits (not the range) are passed to the shader, so I can't get the same scaling in the shader without more changes to vispy (and I'm not sure making those changes would make sense).
I see... Yeah maybe this is overcomplicating things unnecessarily :P
Looks great! The only thing I suggest is to switch everything to using range rather than the clim itself.
Force-pushed from deb25b3 to 066173c
Changes look great, though I'm not sure I know what those test failures mean... the pyside6 one maybe will be fixed by #5244, but the headless test looks odd.
I left a comment suggesting a change, but I'm happy with the current state as well if that is not possible/too hard.
Hm, I'll also look into the headless failure when I have a chance. Thanks for all your time on this!
I can't reproduce the
Force-pushed from dc55ee1 to af27d4b
…contrast * Address review comments
Force-pushed from af27d4b to cb28d03
This is back to green after rebasing on main. I consider this ready to go, but will be happy to make further updates if there are more suggestions.
Ah, this is a known old issue... Well, we don't need to solve it here :) As far as I'm concerned, this can be merged! Will leave it open for a day or so to see if we get other feedback.
Merged! Thanks a lot @aganders3, also for upstreaming what's possible to vispy :)
Thanks for all your time reviewing and providing feedback!
Description
This is meant to fix some ugly volume rendering issues with certain data types when using the iso or attenuated_mip modes. In my investigation, it seemed to mostly be a result of certain data types not being normalized in GPU memory when using texture_format = 'auto'.
Here's what it can look like now with 16-bit signed ints (from a directory of DICOM files):
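For context on why raw-unit thresholds break under texture_format = 'auto': when an integer volume is uploaded as a normalized texture, the GPU rescales its values. This sketch only illustrates that rescaling (roughly how OpenGL normalized integer formats behave); it is not napari or vispy code.

```python
import numpy as np

def as_normalized_texture(data):
    """Approximate what a normalized integer texture upload does:
    values are divided by the dtype's max, so int16 data lands in
    roughly [-1, 1] and uint8/uint16 data in [0, 1]."""
    return data.astype(np.float32) / np.iinfo(data.dtype).max

# Hounsfield-unit-like int16 CT values end up tiny after normalization:
ct = np.array([-1024, 0, 1000, 32767], dtype=np.int16)
print(as_normalized_texture(ct))
```

A threshold that makes sense in raw units (say, a few hundred for CT bone) becomes a value near zero in shader units, which is why untouched iso/attenuated_mip settings looked so wrong for these dtypes.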
Screen.Recording.2022-10-13.at.3.12.58.PM.mov
With these changes, this is how it behaves:
Screen.Recording.2022-10-13.at.8.11.07.PM.mov
Type of change
References
Discussion in Zulip
Visual Human Project CT Data
How has this been tested?
Tested with some of the sample data that comes with napari (e.g. Brain (3D)) and some CT data from the Visual Human Project. I would appreciate others testing this, as it involves "tuning" parameters and I don't have a huge collection of relevant data. Suggestions from the Zulip stream were split into separate commits for easy testing.
Any input on possible automated tests is also appreciated.
Final checklist: