Ensure same name arrays do not shallow copy #2872
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##             main    #2872      +/-   ##
==========================================
- Coverage   94.09%   94.04%   -0.06%
==========================================
  Files          76       76
  Lines       16426    16428       +2
==========================================
- Hits        15456    15449       -7
- Misses        970      979       +9
```
Hmm, I don't think I fully understand the issue, but I guess if the change makes the segfault go away it's definitely the right call. But looking at the use case of the fix we can come up with an even more absurd test:

```python
import numpy as np
import pyvista

points = np.zeros((10000000, 3))
dataset = pyvista.PointSet(points)
data = np.arange(dataset.n_points, dtype=float)
dataset['scalars'] = data.copy()
dataset['scalars'] = dataset['scalars']
print(dataset['scalars'])
```

I can't really put my finger on the issue, because it seems to me that in the buggy case execution went like this: pyvista/pyvista/core/datasetattributes.py, line 223 in 257f4aa
pyvista/pyvista/core/datasetattributes.py, lines 614 to 616 in 257f4aa

And here: pyvista/pyvista/core/datasetattributes.py, lines 775 to 778 in 257f4aa
I would assume what's going on is that

But then why does this not reproduce the segfault:

```python
import numpy as np
import pyvista

points = np.zeros((10000000, 3))
dataset = pyvista.PointSet(points)
data = np.arange(dataset.n_points, dtype=float)
dataset['scalars'] = data.copy()

# pure-pyvista version
# dataset['scalars'] = dataset['scalars']

# VTK version
shallow = type(dataset['scalars'].VTKObject)()           # create shallow copy
shallow.ShallowCopy(dataset['scalars'].VTKObject)        # initialize shallow copy
shallow.SetName(dataset['scalars'].VTKObject.GetName())  # rename shallow copy
dataset.point_data.VTKObject.AddArray(shallow)           # (re-)add shallow copy
dataset.point_data.VTKObject.Modified()                  # set to modified
print(dataset['scalars'])                                # no explosion
```
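For contrast with the working VTK sequence, the failure mode under discussion (a shallow copy leaving the underlying data with no owner) has a loose pure-Python analogue. The `Buffer` and `ShallowWrapper` classes below are hypothetical stand-ins to illustrate the reference-ownership pattern only; they do not mirror VTK's actual internals:

```python
import gc
import weakref

class Buffer:
    """Stands in for the data array that owns the actual memory."""
    def __init__(self, data):
        self.data = data

class ShallowWrapper:
    """Stands in for a shallow copy that only borrows the buffer."""
    def __init__(self, buffer):
        self.buffer = weakref.ref(buffer)  # non-owning reference

store = {'scalars': Buffer([1.0, 2.0, 3.0])}

# Replace the owning entry with a wrapper that does not own the buffer.
wrapper = ShallowWrapper(store['scalars'])
store['scalars'] = wrapper

gc.collect()
# The Buffer's last strong reference was the dict entry we overwrote,
# so the borrowed data has now been collected.
print(wrapper.buffer())  # None: dangling reference
```

If the collector in the real library behaves anything like this, re-adding a renamed shallow copy in place of the original would be exactly the step that frees the memory still in use.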
Co-authored-by: Tetsuo Koyama <tkoyama010@gmail.com>
I spent a while trying every alternative to my proposal, and only this one did the trick.

I find that perplexing as well, especially as we're effectively following the same steps as in

Recommending that we proceed and debug later should this ever crop up again.
> Recommending that we proceed and debug later should this ever crop up again.

Good call, let's be pragmatic here. And let's be suspicious if someone mentions "segfault" in the near future.
Resolves #2864 by ensuring that same-named arrays are not shallow copied, but instead have their reference returned directly by `_prepare_array`. For whatever reason, arrays that have been shallow copied still have their underlying data collected by VTK even though they are still in use.

You can reproduce the memory issue (from a fresh Python shell) with the self-assignment snippet shown in the discussion above.
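The identity-return behavior described for `_prepare_array` can be sketched in isolation. The `AttributeStore` class below is a hypothetical stand-in, not pyvista's actual implementation; it copies on normal assignment (mimicking the wrapping a real store does) but short-circuits when the same object is re-assigned under its own name:

```python
import numpy as np

class AttributeStore:
    """Hypothetical stand-in for a dataset's point-data mapping."""
    def __init__(self):
        self._arrays = {}

    def __setitem__(self, name, array):
        if self._arrays.get(name) is array:
            # Same object assigned under its own name: keep the existing
            # reference untouched instead of installing a copy that could
            # orphan the original data.
            return
        self._arrays[name] = np.array(array, copy=True)

    def __getitem__(self, name):
        return self._arrays[name]

store = AttributeStore()
store['scalars'] = np.arange(5, dtype=float)

first = store['scalars']
store['scalars'] = store['scalars']  # self-assignment is a no-op
assert store['scalars'] is first     # same reference, no replacement
```

Without the identity guard, the self-assignment would install a fresh copy and drop the previously stored array, which is the situation the fix avoids.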