
release-20.1: rowexec: release buckets from hash aggregator eagerly #47518

Merged 1 commit into cockroachdb:release-20.1 on May 1, 2020

Conversation

@yuzefovich (Member) commented Apr 15, 2020

Backport 1/2 commits from #47466.

/cc @cockroachdb/release


rowexec: release buckets from hash aggregator eagerly

This commit makes the hash aggregator release the memory under its buckets
eagerly (as soon as we're done with a bucket) so that it can be returned to
the system. This matters a lot when there is a large number of buckets (on
the order of 100k). Previously, the release happened only on flow shutdown,
once the references to the hashAggregator processor were dropped. That was
problematic: we "released" the associated memory from the memory accounting,
yet we were still holding the references. With this commit we reduce the
memory footprint and come much closer to what our memory accounting thinks
we're using.

Fixes: #47205.

Release note (bug fix): Previously, CockroachDB was incorrectly
releasing memory used by hash aggregation: we released the correct
amount from the internal memory accounting system but, by mistake, kept
the references to the actual memory for some time, which prevented that
memory from being garbage collected. This could lead to a crash (more
likely when the hash aggregation had to store on the order of 100k
groups) and is now fixed.
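The pattern described above can be sketched in Go. This is a minimal, hypothetical stand-in, not the real `rowexec.hashAggregator`: the `hashAgg`/`aggBucket` names and the average-per-group logic are invented for illustration. The point it shows is the fix itself: deleting each bucket from the map as soon as its result is emitted makes the bucket garbage-collectable immediately, instead of only when the whole aggregator becomes unreachable at flow shutdown.

```go
package main

import "fmt"

// aggBucket is a hypothetical stand-in for a hash aggregator's
// per-group state (the real per-bucket state in rowexec is richer).
type aggBucket struct {
	sum   int64
	count int64
}

// hashAgg groups values by key, one bucket per group.
type hashAgg struct {
	buckets map[string]*aggBucket
}

func newHashAgg() *hashAgg {
	return &hashAgg{buckets: make(map[string]*aggBucket)}
}

func (h *hashAgg) add(key string, v int64) {
	b, ok := h.buckets[key]
	if !ok {
		b = &aggBucket{}
		h.buckets[key] = b
	}
	b.sum += v
	b.count++
}

// emitAll produces the per-group average. Crucially, it deletes each
// bucket from the map as soon as its result is emitted (deleting during
// a range loop is safe in Go), so the bucket's memory can be reclaimed
// by the GC right away rather than only when the aggregator itself is
// dropped at flow shutdown.
func (h *hashAgg) emitAll() map[string]int64 {
	out := make(map[string]int64, len(h.buckets))
	for k, b := range h.buckets {
		out[k] = b.sum / b.count
		// Eager release: drop the reference now.
		delete(h.buckets, k)
	}
	return out
}

func main() {
	h := newHashAgg()
	h.add("a", 2)
	h.add("a", 4)
	h.add("b", 10)
	res := h.emitAll()
	fmt.Println(res["a"], res["b"], len(h.buckets))
}
```

With the eager deletes, `len(h.buckets)` is 0 after `emitAll`, which is what keeps the process's real footprint in line with what the memory accounting reports.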

@cockroach-teamcity (Member): This change is Reviewable

@yuzefovich (Member, Author):
I would call this a soft release blocker and would like to include it in the 20.1.0 release.

@asubiotto (Contributor):

I think this is probably going to have to wait until 20.1.0 is out the door. This issue does cause a server crash, but only when an aggregation has a very large number of groups (seen with 500k distinct groups). That's enough of an edge case to hold off, since at this point we're only merging changes we would consider delaying the release for.

@yuzefovich (Member, Author):

SGTM.

@yuzefovich added the do-not-merge label (bors won't merge a PR with this label) on Apr 15, 2020
@yuzefovich removed the do-not-merge label on May 1, 2020
@yuzefovich yuzefovich merged commit d9f3e9b into cockroachdb:release-20.1 May 1, 2020
@yuzefovich yuzefovich deleted the backport20.1-47466 branch May 1, 2020 16:35
3 participants