
Consistent treatment of strides in Dense #748

Closed
upsj opened this issue Apr 21, 2021 · 0 comments · Fixed by #774
upsj (Member) commented Apr 21, 2021

I believe our treatment of strides in Dense vectors could be improved. Sometimes we ignore strides and copy padding data directly, sometimes we overwrite the existing stride, causing potential reallocations, and sometimes it looks like we could reduce storage. I see two important areas where this pops up:

Matrix conversions

EnablePolymorphicAssignment implements convert_to via the assignment operator. This means that if you copy from a Dense matrix to another matrix, the operation overwrites the target's stride, potentially reallocates the target, and copies all padding data over. I believe we could improve our memory footprint if we reused the existing storage and didn't modify the target stride. At the very least, we shouldn't touch padding data at all, neither reading nor writing it.
The same applies to value type conversions and (potentially) other places I can't think of right now.
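To illustrate the point, here is a minimal sketch of a stride-aware value copy. The `StridedDense` struct and `copy_values` function are stand-ins I made up for this example, not Ginkgo's actual `gko::matrix::Dense` API; the idea is that a conversion should copy only the rows x cols values, keep the target's own stride and allocation, and never touch padding entries:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal stand-in for a row-major strided dense matrix
// (illustrative only, not Ginkgo's real gko::matrix::Dense).
struct StridedDense {
    std::size_t rows;
    std::size_t cols;
    std::size_t stride;       // >= cols; entries beyond cols in a row are padding
    std::vector<double> data; // rows * stride entries

    StridedDense(std::size_t r, std::size_t c, std::size_t s)
        : rows(r), cols(c), stride(s), data(r * s, 0.0) {}

    double& at(std::size_t i, std::size_t j) { return data[i * stride + j]; }
    double at(std::size_t i, std::size_t j) const { return data[i * stride + j]; }
};

// Stride-aware copy: copies only the rows x cols values, keeps the
// target's own stride and allocation, and never reads or writes padding.
void copy_values(const StridedDense& src, StridedDense& dst) {
    assert(src.rows == dst.rows && src.cols == dst.cols);
    for (std::size_t i = 0; i < src.rows; ++i) {
        for (std::size_t j = 0; j < src.cols; ++j) {
            dst.at(i, j) = src.at(i, j);
        }
    }
}
```

Copying this way from a source with stride 8 into a compact target with stride 2 moves 4 values instead of 16, and the target keeps its stride and storage.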

Solvers

In solvers, we use Dense::create_with_config_of to create intermediate vectors of the same size and stride. This means that if our input vector has a huge stride, we will use this value for all internal vectors, potentially creating a large amount of unnecessary padding. I guess the idea behind this is that we only need to pass a single stride value to the kernels, instead of having a distinct stride for each vector? Do you think we can find a more memory-efficient solution for the worst case?
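The memory cost of inheriting the input stride can be sketched with a small back-of-the-envelope example. `DenseConfig`, `create_with_size_of`, and `storage_elems` are hypothetical names for this illustration; only the `create_with_config_of` behavior (copying size and stride) mirrors what the issue describes:

```cpp
#include <cassert>
#include <cstddef>

// Minimal stand-in for a dense vector's shape (not Ginkgo's real API).
struct DenseConfig {
    std::size_t rows;
    std::size_t cols;
    std::size_t stride;  // >= cols
};

// What create_with_config_of effectively does today: inherit the stride.
DenseConfig create_with_config_of(const DenseConfig& other) {
    return {other.rows, other.cols, other.stride};
}

// Hypothetical alternative: match the size but use a compact stride,
// so a huge input stride is not replicated into every temporary.
DenseConfig create_with_size_of(const DenseConfig& other) {
    return {other.rows, other.cols, other.cols};
}

// Storage in elements for a row-major block with the given config.
std::size_t storage_elems(const DenseConfig& c) {
    return c.rows * c.stride;
}
```

For a 1,000,000 x 2 input with stride 64, every intermediate created with the inherited stride costs 64,000,000 elements, versus 2,000,000 with a compact stride, and a solver typically allocates several such temporaries.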

@upsj upsj added the is:enhancement An improvement of an existing feature. label Apr 21, 2021
@upsj upsj added this to the Ginkgo 1.4.0 milestone May 5, 2021
@upsj upsj added this to Short-term tasks in Ginkgo development May 12, 2021
@upsj upsj linked a pull request May 25, 2021 that will close this issue
@upsj upsj self-assigned this May 25, 2021
@upsj upsj moved this from Short-term tasks to Awaiting Merge in Ginkgo development Jun 2, 2021
@upsj upsj closed this as completed in #774 Jun 3, 2021