fix splitting sharp edges with active cell scalars #2695
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main    #2695      +/-   ##
==========================================
+ Coverage   93.70%   93.81%   +0.10%
==========================================
  Files          75       76       +1
  Lines       16178    16185       +7
==========================================
+ Hits        15160    15184      +24
+ Misses       1018     1001      -17
This broke our docs (the PyMAPDL documentation), and I'd like to have this out for the next release (end of June?). I'd appreciate a review when anyone gets a chance.
I've added a few small changes, please check. Otherwise this PR looks good to me. That being said, two points:
- In `prepare_smooth_shading` we talk about Phong shading, but in `compute_normals` we talk about Gouraud. Shouldn't these refer to the same method? I know there was a recent-ish switch from one to the other... but perhaps that was only for PBR?
- Right now we use `np.int32` for the original point IDs, presumably to spare memory. But `np.iinfo(np.int32).max` says this goes up to `2147483647`, i.e. 2 billion points. If I estimated correctly, even with double-precision coordinates such a mesh (sans data) would amount to 48 GB of memory, which is arguably not that far-fetched these days. How about we make use of the fact that point indices are non-negative, and use `np.uint32` instead? Or `np.uint64`, but I can see how doubling the memory for this auxiliary array could become a problem... Also cc @MatthewFlamm, who usually has strong points about memory.
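For what it's worth, the memory arithmetic above can be sketched as follows (an illustrative snippet, not code from the PR; the 48 GB figure assumes roughly 2 billion points with double-precision xyz coordinates):

```python
import numpy as np

# np.int32 caps the original-point-ID array at ~2.1 billion points.
max_pts_int32 = np.iinfo(np.int32).max  # 2147483647

# Coordinates alone for ~2 billion points, double precision, xyz:
coords_gb = 2e9 * 3 * 8 / 1e9  # 48.0 GB, matching the estimate above

# Switching the auxiliary ID array from uint32 to uint64 doubles its footprint:
ids_uint32_gb = 2e9 * np.dtype(np.uint32).itemsize / 1e9  # 8 GB
ids_uint64_gb = 2e9 * np.dtype(np.uint64).itemsize / 1e9  # 16 GB
```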
I think minimizing memory usage is very important when talking about adding/modifying cell or point data. I speculate that a mesh of 2 billion points is likely to be a volumetric one, e.g.
Great points, thanks @MatthewFlamm. So the only question is whether anything can break with uint32; I expect pyvista to work seamlessly with that, but the question is whether VTK can do something funny with more esoteric dtypes.
Good catch. We're using
That's a reasonable point, and as much as I'd like to improve the memory efficiency of pyvista, I don't want to stray too far from the way that VTK does things. Speaking of that, I picked Implemented that change in 8896330.
That's fair.
For what it's worth, this sounds a lot like just "numpy default int": an int the size of the word size (
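As a quick check of what a word-sized "numpy default int" means in practice (a small illustrative snippet, not from the PR):

```python
import numpy as np

# np.intp is the integer type numpy uses for indexing: it matches the
# platform word size (8 bytes on 64-bit systems, 4 on 32-bit).
word_size = np.dtype(np.intp).itemsize
print(word_size * 8)  # bit width of the platform's "default" index integer
```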
Note: I'm skipping a test on macOS. Now that we're running image cache verification, I expect to run into these issues more often, especially on macOS.
Splitting sharp edges fails for polydata when the active scalars are cell scalars:
The fix is simply to not track cell indices, since they do not change when splitting sharp edges.
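The reasoning behind the fix can be sketched with plain numpy (hypothetical arrays, not the actual pyvista implementation): splitting sharp edges duplicates points, so point scalars must be re-indexed through an original-point-ID map, while the cells themselves are untouched and cell scalars can pass through unchanged.

```python
import numpy as np

# Scalars before splitting (hypothetical small mesh).
point_scalars = np.array([0.0, 1.0, 2.0, 3.0])  # one value per point
cell_scalars = np.array([10.0, 20.0])           # one value per cell

# Suppose splitting duplicates points 1 and 2 along a sharp edge.
# orig_point_ids maps each post-split point back to its source point.
orig_point_ids = np.array([0, 1, 2, 3, 1, 2], dtype=np.uint32)

# Point scalars must be gathered through the original-ID map...
split_point_scalars = point_scalars[orig_point_ids]

# ...but no cells were added or removed, so cell indices need no tracking
# and cell scalars carry over as-is.
split_cell_scalars = cell_scalars
```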