Make point size isotropic #5582
Conversation
Codecov Report
@@ Coverage Diff @@
## main #5582 +/- ##
==========================================
- Coverage 91.47% 91.45% -0.02%
==========================================
Files 571 571
Lines 49771 49710 -61
==========================================
- Hits 45527 45463 -64
- Misses 4244 4247 +3
Maybe I should have given a more extreme example. If you use a scale of …
With a larger image, indeed on …
It worked when setting things manually, but not once I put it in the layer initialization. Turns out, the way we initialize size in … EDIT: it's worse; while …
I'm surprised I never encountered an issue here... But I'm not sure how to solve it. We also expose …
Should a similar change also be introduced when adding points in the constructor? I'm also thinking about the scenario of saving and loading points via CSV.
This is tricky, and comes with all the issues that we had when discussing #4705.
Why can't we compute the size as a layer property, calculated from `layer.extent.step`?
Not sure I follow... the main issue is that we're missing a …
As I understand it, the problem is that vispy ignores the scale parameter for points.
I think we're losing track of the issue here :P The problem was initially that anisotropic scales were causing issues, because they make no sense for non-deformable objects like markers. This was apparent in your issue in #2213. So I disabled that in vispy by having the … Now, this revealed an extra problem, which is that not just … I'm wondering if it makes sense to keep that as it is, since we can't even render anisotropic markers, so the only effect that anisotropic …
For me using …
Why not …
Notes from chatting with @Czaki:
Maybe a good heuristic is to just use the last dimension. It solves the problem now, and anisotropic scales/sizes are the only ones affected (which don't work on main anyway). We're still left with a better situation compared to #2213, because at least points don't zoom in/out when rotating.
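The "last dimension" heuristic can be sketched as follows; `screen_size`, `point_size`, and `layer_scale` are hypothetical stand-ins for the layer's scale vector and a point's size, not the actual napari internals:

```python
import numpy as np

def screen_size(point_size: float, layer_scale: np.ndarray) -> float:
    """Scale a point's size using only the last dimension of the layer scale.

    Anisotropic marker sizes cannot be rendered anyway (see discussion
    above), so the last dimension is as good a pick as any.
    """
    return point_size * layer_scale[-1]

# isotropic scale: behaves exactly as expected
assert screen_size(10, np.array([5.0, 5.0])) == 50.0
# anisotropic scale: only the last dimension matters
assert screen_size(10, np.array([1.0, 5.0])) == 50.0
```

For isotropic scales (the only case that renders correctly today) this is identical to any other per-dimension choice, which is why the heuristic is considered safe.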
@andy-sweet and @mstabrin: git says you both touched this code quite a bit, so you might be interested in checking this! I'm a bit confused about the failing tests... |
This is ready to review and merge now :) |
Pretty much looks good and seemed to work as advertised for me. My only request is to fix/clarify the `extent_augmented` issue.
@@ -60,20 +57,23 @@ def _on_data_change(self):

set_data = self.node._subvisuals[0].set_data

# use only last dimension to scale point sizes, see #5582
scale = self.layer.scale[-1]
Optional: my intuitive pick for this would be the root of the volume scaling factor of the transform associated with the node. But this is better than nothing, should work for common cases, and I don't feel strongly here.
> root of the volume scaling factor of the transform associated with the node

You mean for all dimensions? But yeah, we discussed around this before; every option has some downside, so we settled on this for now and we'll see if people complain :P
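For reference, the reviewer's alternative (the n-th root of the volume scaling factor, i.e. the geometric mean of the per-dimension scales) could look like the sketch below. This is a sketch of the suggestion, not what the PR implements:

```python
import numpy as np

def volume_root_scale(scale: np.ndarray) -> float:
    # n-th root of the product of the per-dimension scale factors:
    # the side length of a cube with the same volume scaling
    return float(np.prod(scale) ** (1 / len(scale)))

# isotropic case: identical to picking any single dimension
assert volume_root_scale(np.array([4.0, 4.0])) == 4.0
# anisotropic case: averages the dimensions geometrically
assert np.isclose(volume_root_scale(np.array([2.0, 8.0])), 4.0)
```

Compared to `scale[-1]`, this blends anisotropic scales instead of ignoring all but one dimension, but since anisotropic markers can't be rendered anyway, both choices agree in every case that currently works.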
napari/layers/points/points.py
Outdated

self.size[i, :] = (self.size[i, :] > 0) * size
self._clear_extent_augmented()
idx = np.fromiter(self.selected_data, dtype=int)
# TODO: explain why this check; only if size > 0?
Is this a TODO for you or for someone in the future? I can't explain the need for the check either, so I'd also be down to simplify it to `self.size[idx] *= size` and remove the TODO.
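For context, a small NumPy sketch of what the `(self.size > 0)` check actually does: it pins zero-size points at 0 when a new size is assigned, which is exactly the behaviour the comments below argue against (points with size 0 should be allowed and updatable). Variable names here are illustrative, not the real layer attributes:

```python
import numpy as np

sizes = np.array([0.0, 1.0, 2.0])
new_size = 5.0

# with the check: zero-size points are forced to stay at 0
masked = (sizes > 0) * new_size

# without the check: every selected point gets the new size
plain = np.full_like(sizes, new_size)

assert masked.tolist() == [0.0, 5.0, 5.0]
assert plain.tolist() == [5.0, 5.0, 5.0]
```

The two only diverge for points whose size is already 0, so removing the check changes behaviour only in that (arguably desirable) case.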
I added it because I didn't understand this check but it was already there, so I left it for the future. However, I now played around a bit, and I'm pretty sure it's wrong (we definitely want to allow points to have size 0, and nothing seems to break on the vispy side when doing so... I'm running tests now).
While removing the "TODO" above, I also simplified the code in two related places as I was trying to understand how it worked. They are just refactors, so nothing should change.
I think you forgot to push?
Looks good to me now. I haven't had a chance to retest this manually since the last commit, but I assume all is good.
# Description

With vispy 0.12 (#5312) we introduced some changes to how point sizes are computed. Specifically, the `scale` is no longer used to determine the size. The initial reason was #2213. For a long explanation (with all the caveats and complications), see vispy/vispy#2453 (specifically, [this comment has a summary of the behaviour](vispy/vispy#2453 (comment))).

While the current state is not ideal (and we're trying to figure out over at vispy if we can solve it), this at least solves nasty problems like #2213. However, it means that a size of `10` is still `10` even if the scale of the layer is `5` (and not `50`).

This may "break" some code that relies on this transformation; in napari, one of the consequences is that the `new points layer` button on the viewer gui will now create tiny (or massive) points if the scale is different. To test, try this:

```py
import napari
import numpy as np

v = napari.Viewer()
v.add_image(np.random.rand(100, 100), scale=[10, 10])
```

then click the "new points" button and try to annotate some points manually. On `main`, they'll be so small that they're invisible.

---

This also exposed another issue (which became the main point of this PR): while point sizes are currently anisotropic, this information is not properly used in many places (visualisation being the primary one); not only is it mostly unused, but it's handwaved in a few places (we arbitrarily take the average size over each dimension to determine the visualised size, we only allow setting isotropic sizes from the gui and `current_size`, and so on). Ultimately, we decided that we can't reasonably support anisotropic sizes, so we should do away with them. Most changes in this PR have to do with that.

## Type of change

- [x] Bug-fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [x] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update

---------

Co-authored-by: Andy Sweet <andrew.d.sweet@gmail.com>
Co-authored-by: Wouter-Michiel Vierdag <w-mv@hotmail.com>
Co-authored-by: Juan Nunez-Iglesias <jni@fastmail.com>
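A quick back-of-the-envelope check of why the new points become invisible on `main` (assuming a default point size of around 10 data units; the exact default may differ):

```python
# image: 100 x 100 pixels at scale [10, 10] spans 1000 x 1000 world units
world_extent = 100 * 10

# with scale no longer applied to point sizes, a point of size 10
# covers roughly 10 world units, i.e. 1% of the visible extent
point_extent = 10
fraction_of_view = point_extent / world_extent

assert fraction_of_view == 0.01
```

At typical window sizes that is only a handful of pixels, which is why the annotated points look invisible unless the size is bumped to match the layer scale.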
This pull request has been mentioned on Image.sc Forum. There might be relevant details there:
…5802)

# Description

Shapes layer interactivity and visualisation currently do not account for the layer scale. Try the following:

```py
import numpy as np
import napari

v = napari.Viewer()
il = v.add_image(np.random.rand(100, 100))
sl = v.add_shapes([[0, 0], [1, 1]], shape_type='path', scale=[100, 100], edge_width=0.1)
```

You'll see a few issues:

- the highlight line is not in screen space as advertised (this is actually a problem with points as well)
- the `rotate` handle is way too far away, because its position is also calculated without accounting for scale
- interacting with the shape in any mode is basically impossible, because coordinates do not account for the scaling of the layer

This PR basically adds a `/ layer.scale[-1]` in several places to get back into correct "screen space". Note that I used the last dimension, as we cannot do anisotropic sizes, just like we recently chose to do with points in #5582.

This PR also fixes #4538 by changing the logic of how the "minimum drag" is calculated.

PS: ideally we want to transition to using the `SelectionOverlay` in the future, but this is a much bigger effort.

Also fixes #5752.

## Type of change

- [x] Bug-fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update