
reset_memory -> restore_original_nonzero_pattern #4115

Merged
lindsayad merged 6 commits into libMesh:devel from lindsayad:implement-reset-memory-for-diagonal-matrix
Mar 27, 2025

Conversation

@lindsayad
Member

I also added a pretty important comment to the doxygen string. This reflects what happens in the PETSc implementation. I now have some concern about the method name ... a name like reset_memory makes me think that there's no way something like a no-op could happen. Perhaps a name like restore_(original_)nonzero_pattern would be better? Would this still conceptually allow its use for hash table based memory allocation? I think so since resetting/clearing the hash table restores the original sparsity pattern ... that pattern being whatever you want it to be

@lindsayad lindsayad changed the title Add a reset_memory implemenntation for DiagonalMatrix Add a reset_memory implementation for DiagonalMatrix Mar 24, 2025
@moosebuild

moosebuild commented Mar 24, 2025

Job Coverage, step Generate coverage on 3b194d9 wanted to post the following:

Coverage

          047a4c    #4115 3b194d
          Total     Total     +/-       New
Rate      63.32%    63.32%    -0.00%    0.00%
Hits      74333     74333     -         0
Misses    43051     43056     +5        7

Diff coverage report

Full coverage report

Warnings

  • New line coverage rate 0.00% is less than the suggested 90.0%

This comment will be updated on new commits.

@jwpeterson
Member

Perhaps a name like restore_(original_)nonzero_pattern would be better?

Definitely sounds better to me. In my mind, a SparseMatrix has memory associated with the sparsity pattern and memory associated with the matrix "values" (although this might not actually match reality), so a reset_memory() API reads as ambiguous at first glance, whereas your other suggestion is not ambiguous at all.

@lindsayad lindsayad changed the title Add a reset_memory implementation for DiagonalMatrix [WIP] reset_memory -> restore_original_nonzero_pattern Mar 25, 2025
@lindsayad lindsayad marked this pull request as draft March 25, 2025 18:55
@lindsayad
Member Author

Will do. I'm converting this to draft because there are upstream changes to PETSc that I need to make for the work here.

I also added a pretty important comment to the doxygen string. This
reflects what happens in the PETSc implementation. I now have some
concern about the method name ... a name like reset_memory makes me
think that there's no way something like a no-op could happen. Perhaps
a name like restore_(original_)nonzero_pattern would be better? Would
this still conceptually allow its use for hash table based memory
allocation? I think so since resetting/clearing the hash table restores
the original sparsity pattern ... that pattern being whatever you
want it to be
@lindsayad lindsayad force-pushed the implement-reset-memory-for-diagonal-matrix branch from 733f203 to 3b194d9 Compare March 26, 2025 22:35
@lindsayad lindsayad changed the title [WIP] reset_memory -> restore_original_nonzero_pattern reset_memory -> restore_original_nonzero_pattern Mar 26, 2025
@lindsayad
Member Author

The Mac ARM test target is pretty obnoxious

@lindsayad lindsayad marked this pull request as ready for review March 26, 2025 23:45
@lindsayad
Member Author

We are not going to change the API in PETSc so I can remove the WIP draft label from this. Instead we are going to honor the documentation of MatResetPreallocation. https://gitlab.com/petsc/petsc/-/merge_requests/8236

@lindsayad lindsayad merged commit 358b251 into libMesh:devel Mar 27, 2025
19 of 20 checks passed
@lindsayad lindsayad deleted the implement-reset-memory-for-diagonal-matrix branch March 27, 2025 17:16
