
Shrink Expr_::ExprInlineAsm. #37445

Merged
merged 1 commit into rust-lang:master from nnethercote:shrink-Expr_ on Oct 30, 2016

Conversation

Contributor

nnethercote commented Oct 28, 2016

On 64-bit this reduces the size of `Expr_` from 144 to 64 bytes, and
reduces the size of `Expr` from 176 to 96 bytes.

For the workload in #36799 this reduces the RSS for the "lowering ast -> hir" phase and all subsequent phases by 50 MiB, which reduces the peak RSS for that workload by about 1%. Not huge, but it's a very easy improvement.

r? @eddyb
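
For readers unfamiliar with the trick, the usual way to get a reduction like this is to box the large payload of the offending enum variant, so the variant stores one pointer instead of the whole struct. A minimal, self-contained sketch with hypothetical types (not the actual rustc definitions or the diff in this PR):

```rust
use std::mem::size_of;

// Hypothetical stand-ins for the real HIR types; the fields are
// illustrative only, not the actual rustc definitions.
#[allow(dead_code)]
struct InlineAsm {
    asm: String,
    outputs: Vec<String>,
    inputs: Vec<String>,
    clobbers: Vec<String>,
    volatile: bool,
    alignstack: bool,
}

// Before: the large InlineAsm payload sits inline in its variant, so every
// value of the enum is as big as the biggest variant.
#[allow(dead_code)]
enum ExprBefore {
    Lit(u64),
    InlineAsm(InlineAsm),
}

// After: only one pointer is stored inline; the payload moves behind a Box
// and is allocated only for expressions that actually are inline asm.
#[allow(dead_code)]
enum ExprAfter {
    Lit(u64),
    InlineAsm(Box<InlineAsm>),
}

fn main() {
    println!("ExprBefore: {} bytes", size_of::<ExprBefore>());
    println!("ExprAfter:  {} bytes", size_of::<ExprAfter>());
}
```

The trade-off is one extra heap allocation and pointer chase per inline-asm expression, which is cheap here because such expressions are rare compared to ordinary ones.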

Shrink Expr_::ExprInlineAsm.
On 64-bit this reduces the size of `Expr_` from 144 to 64 bytes, and
reduces the size of `Expr` from 176 to 96 bytes.

Member

eddyb commented Oct 28, 2016

cc @jseyfried Doing the same for the AST might speed up expansion.

@bors r+


Contributor

bors commented Oct 28, 2016

📌 Commit a920e35 has been approved by eddyb


Contributor

bors commented Oct 30, 2016

⌛️ Testing commit a920e35 with merge f5a702d...

bors added a commit that referenced this pull request Oct 30, 2016

Auto merge of #37445 - nnethercote:shrink-Expr_, r=eddyb

bors merged commit a920e35 into rust-lang:master Oct 30, 2016

2 checks passed

continuous-integration/travis-ci/pr: The Travis CI build passed
homu: Test successful

Contributor

nnethercote commented Oct 31, 2016

For the workload in #36799 this reduces the RSS for the "lowering ast -> hir" phase and all subsequent phases by 50 MiB, which reduces the peak RSS for that workload by about 1%. Not huge, but it's a very easy improvement.

I did some more measurements because the above improvement was smaller than expected. Massif showed that this change does make a decent difference. Here are measurements for a cut down version of ostn15_phf that peaks at ~1 GiB instead of ~4.5 GiB. Before:

98.81% (1,053,399,475B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->23.11% (246,413,376B) 0x7E07CE6: rustc::hir::lowering::LoweringContext::lower_expr (heap.rs:59)
| ->09.91% (105,601,584B) 0x7DC80AD: <collections::vec::Vec<T> as core::iter::traits::FromIterator<T>>::from_iter (lowering.rs:1036)

After:

98.69% (949,710,821B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->13.97% (134,407,296B) 0x7E09522: rustc::hir::lowering::LoweringContext::lower_expr (heap.rs:59)
| ->05.99% (57,600,864B) 0x7DC9B9E: <collections::vec::Vec<T> as core::iter::traits::FromIterator<T>>::from_iter (lowering.rs:1036)

The Expr allocations are reduced in size by almost 50%, as expected. Good.

I then did some RSS measurements of stage2 compilers via /usr/bin/time. They jump around a bit, but if I take the lowest of 3 runs, the before vs. after improvement is 1108 MB -> 1004 MB, which is a 9% drop. Good.

In conclusion, I'm not sure what went wrong with my initial measurements, but I'm confident now that this reduces memory usage for ostn15_phf by about 9%.
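
As an aside, one way to keep a win like this from silently regressing is a size test pinned to the 64-bit numbers quoted above. A hypothetical sketch with stand-in types (the real `Expr`/`Expr_` live in rustc's hir module and are not reproduced here):

```rust
use std::mem::size_of;

// Stand-in types, roughly shaped like the post-PR layout described above;
// they are not the real rustc definitions.
#[allow(dead_code, non_camel_case_types)]
enum Expr_ {
    Lit(u64),
    InlineAsm(Box<[usize; 12]>), // boxed payload: one pointer inline
}

#[allow(dead_code)]
struct Expr {
    id: u32,
    node: Expr_,
    span: (u32, u32),
    attrs: Vec<()>,
}

#[cfg(target_pointer_width = "64")]
#[test]
fn expr_sizes_do_not_regress() {
    // Upper bounds taken from the 64-bit numbers in this PR's description.
    assert!(size_of::<Expr_>() <= 64);
    assert!(size_of::<Expr>() <= 96);
}
```

rustc later grew `static_assert_size!` checks in the same spirit for its hot AST and HIR types.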

nnethercote deleted the nnethercote:shrink-Expr_ branch Oct 31, 2016

brson added the relnotes label Oct 31, 2016


Contributor

brson commented Oct 31, 2016

Nice wins.
