
BlockSparseMatrix might benefit from caching column-block partition #933

Open

sandwichmaker opened this issue Dec 18, 2022 · 1 comment
sandwichmaker (Contributor) commented Dec 18, 2022

There is a design question as to which class should cache this partitioning, but one place where it can certainly be done is inside CgnrLinearOperator, since the block sparsity structure is guaranteed to remain constant for its lifetime. This should have a nice effect on the performance of CGNR when calling LeftMultiplyAndAccumulate.

Other methods that may benefit from this are:

- `SquareColumnNorm`
- `ScaleColumns`

But the frequency with which they are called is much smaller.
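
For illustration, here is a minimal, self-contained sketch of the caching pattern being proposed. The names `MockBlockSparseMatrix`, `ComputeColumnBlockPartition`, and `CachedCgnrOperator` are hypothetical stand-ins, not Ceres API; in Ceres the analogous change would compute the column-block partition once in CgnrLinearOperator's constructor and reuse it across calls to BlockSparseMatrix::LeftMultiplyAndAccumulate.

```cpp
// Hypothetical sketch, not Ceres code: cache the column-block partition
// once, when the operator is constructed, and reuse it for every multiply
// performed during CGNR iterations.
#include <utility>
#include <vector>

// Stand-in for the column-block layout of a block sparse matrix:
// the sizes of its column blocks, in order.
struct MockBlockSparseMatrix {
  std::vector<int> col_block_sizes;
};

// Greedily group consecutive column blocks into chunks of roughly
// `target_cols` scalar columns each. This is the work we would like to
// avoid repeating on every LeftMultiplyAndAccumulate call.
std::vector<std::pair<int, int>> ComputeColumnBlockPartition(
    const MockBlockSparseMatrix& A, int target_cols) {
  std::vector<std::pair<int, int>> ranges;
  const int num_blocks = static_cast<int>(A.col_block_sizes.size());
  int begin = 0;
  int cols_in_chunk = 0;
  for (int i = 0; i < num_blocks; ++i) {
    cols_in_chunk += A.col_block_sizes[i];
    if (cols_in_chunk >= target_cols) {
      ranges.emplace_back(begin, i + 1);  // half-open block range [begin, i+1)
      begin = i + 1;
      cols_in_chunk = 0;
    }
  }
  if (begin < num_blocks) {
    ranges.emplace_back(begin, num_blocks);
  }
  return ranges;
}

// Operator that owns the cached partition. Because the block sparsity
// structure of A is fixed for the operator's lifetime, the partition is
// computed exactly once, in the constructor.
class CachedCgnrOperator {
 public:
  explicit CachedCgnrOperator(const MockBlockSparseMatrix& A)
      : partition_(ComputeColumnBlockPartition(A, /*target_cols=*/256)) {}

  const std::vector<std::pair<int, int>>& partition() const {
    return partition_;
  }

 private:
  std::vector<std::pair<int, int>> partition_;
};

int main() {
  MockBlockSparseMatrix A{{64, 64, 128, 256, 32}};
  CachedCgnrOperator op(A);
  // op.partition() is now reused across all CGNR iterations instead of
  // being recomputed inside every multiply.
  return 0;
}
```

The property being exploited is exactly the one noted above: the block sparsity structure, and therefore the partition derived from it, is invariant for the operator's lifetime, so computing it once in the constructor is safe.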

sandwichmaker (Contributor, Author) commented:

cc: @DmitriyKorchemkin

@sandwichmaker sandwichmaker changed the title BlockSparseMatrix::LeftMultiplyAndAccumulate might benefit from caching column-block partition BlockSparseMatrix might benefit from caching column-block partition Dec 18, 2022