
Fix aggregation memory revoke when operator is finishing #164

Merged
merged 5 commits into from Feb 6, 2019

Conversation

sopel39
Member

@sopel39 sopel39 commented Feb 5, 2019

No description provided.

@sopel39 sopel39 added the bug Something isn't working label Feb 5, 2019
@sopel39 sopel39 requested a review from findepi February 5, 2019 19:36
@cla-bot cla-bot bot added the cla-signed label Feb 5, 2019
Copy link
Member

@findepi findepi left a comment


Generally looks good. Please update code as fixups for easier re-review.

.addSequencePage(10, 100, 0, 300, 0, 500)
.addSequencePage(numberOfRows, 100, 0, 100000, 0, 500)
.addSequencePage(numberOfRows, 100, 0, 200000, 0, 500)
.addSequencePage(numberOfRows, 100, 0, 300000, 0, 500)

nit: 300_000
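For context, the nit refers to Java's underscore separators in numeric literals (available since Java 7): the underscores have no effect on the value and only make large constants like `300_000` easier to read at a glance.

```java
public class UnderscoreLiterals {
    public static void main(String[] args) {
        // 300_000 and 300000 denote the same int value; the underscores
        // only improve readability of the literal in source code
        int withUnderscores = 300_000;
        int plain = 300000;
        System.out.println(withUnderscores == plain);
    }
}
```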


MaterializedResult expected = expectedBuilder.build();
MaterializedResult actual = toMaterializedResult(driverContext.getSession(), expected.getTypes(), pages);
assertEqualsIgnoreOrder(actual.getMaterializedRows(), expected.getMaterializedRows());

I don't get it. Previously the code was calling assertOperatorEqualsIgnoreOrder.
Now it's more verbose, manually calling toPages, dropping hash channels, and converting to MaterializedResult.
I assume this is assertOperatorEqualsIgnoreOrder inlined, but what's the actual difference?

Is this really related to increasing the data set, as the "Produce more than single page in testHashAggregation" commit message says?
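For readers unfamiliar with the assertion being discussed: an order-insensitive comparison like assertEqualsIgnoreOrder can be thought of as comparing the two row sets as multisets. A minimal stand-alone illustration of that idea (not Trino's actual implementation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class IgnoreOrderSketch {
    // Compare two row lists as multisets: same elements (with the same
    // multiplicities), order irrelevant.
    static boolean equalsIgnoreOrder(List<String> actual, List<String> expected) {
        List<String> a = new ArrayList<>(actual);
        List<String> e = new ArrayList<>(expected);
        Collections.sort(a);
        Collections.sort(e);
        return a.equals(e);
    }

    public static void main(String[] args) {
        System.out.println(equalsIgnoreOrder(
                Arrays.asList("row2", "row1"),
                Arrays.asList("row1", "row2")));
    }
}
```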

// have a proper way to atomically convert memory reservations
localRevocableMemoryContext.setBytes(currentRevocableBytes);
// spill since revocable memory could not be converted to user memory immediately
getFutureValue(spillToDisk());

This can take time.
Can we spill asynchronously?

spillToDisk();
return WorkProcessor.empty();

(then we need to call updateMemory() on the next call..)
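The reviewer's "spill asynchronously" idea could look roughly like the sketch below. Everything here is hypothetical: AsyncSpillSketch, spillToDisk(), processNext(), and updateMemory() are stand-ins for the operator's real methods, not Trino's actual API. The point is only the pattern of kicking off the spill without blocking and finishing the bookkeeping on a later call.

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of spilling asynchronously instead of blocking on
// getFutureValue(spillToDisk()). None of these names are Trino's real API.
public class AsyncSpillSketch {
    private CompletableFuture<Void> spillInProgress = CompletableFuture.completedFuture(null);

    // Start the spill in the background and return immediately.
    CompletableFuture<Void> spillToDisk() {
        spillInProgress = CompletableFuture.runAsync(() ->
                System.out.println("spilling in background"));
        return spillInProgress;
    }

    // On a later process() call: yield while the spill is running,
    // then update memory accounting once it has finished.
    void processNext() {
        if (!spillInProgress.isDone()) {
            return; // yield instead of blocking (e.g. return WorkProcessor.empty())
        }
        updateMemory();
    }

    void updateMemory() {
        System.out.println("memory updated");
    }

    public static void main(String[] args) {
        AsyncSpillSketch op = new AsyncSpillSketch();
        op.spillToDisk().join(); // wait here only so the demo output is deterministic
        op.processNext();
    }
}
```

The operator-level version would be more involved (the blocking would surface through the operator's isBlocked/yield machinery rather than a join), which is presumably why the thread below settles on a TODO pending a bigger refactor.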

Member Author

Added a TODO, since this requires a bigger operator refactor

@sopel39
Member Author

sopel39 commented Feb 6, 2019

ac

Member

@findepi findepi left a comment


nit

@sopel39 sopel39 merged commit 79744f2 into trinodb:master Feb 6, 2019
@sopel39 sopel39 deleted the ks/hash_agg_recok branch February 6, 2019 12:41
rice668 pushed a commit to rice668/trino that referenced this pull request Jan 31, 2023