High memory usage compiling `keccak` benchmark #54208

nnethercote opened this Issue Sep 13, 2018 · 9 comments

nnethercote (Contributor) commented Sep 13, 2018

According to perf.rust-lang.org, a "Clean" build of keccak-check has a max-rss of 637 MB. Here's a Massif profile of the heap memory usage.
[Massif profile: keccak-clean]

The spike is due to a single allocation of 500,363,244 bytes here:

```rust
users: vec![invalid_users(); num_live_nodes * num_vars],
```

Each vector element is a Users, a three-field struct taking up 12 bytes. num_live_nodes is 16,371 and num_vars is 2,547, and 12 * 16,371 * 2,547 = 500,363,244.

I have one idea to improve this: Users is a triple containing two u32s and a bool, which means that it takes up 96 bits even though it only contains 65 bits of data. If we split it up so we have 3 vectors instead of a vector of triples, we'd end up with 4 * 16,371 * 2,547 + 4 * 16,371 * 2,547 + 1 * 16,371 * 2,547 = 375,272,433 bytes, which is a reduction of 125,090,811 bytes. This would get max-rss down from 637MB to 512MB, a reduction of 20%.
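
For illustration, here's a rough sketch of that split (the field names and exact definitions are made up, not the real rustc ones):

```rust
// Hypothetical stand-in for `Users`: two u32s plus a bool get padded out
// to 12 bytes (96 bits) per element, even though only 65 bits are data.
struct Users {
    reader: u32, // hypothetical field names
    writer: u32,
    used: bool,
}

// Struct-of-arrays: three parallel vectors, 4 + 4 + 1 = 9 bytes per
// element, so the per-element padding disappears.
struct UsersSplit {
    reader: Vec<u32>,
    writer: Vec<u32>,
    used: Vec<bool>,
}

fn main() {
    assert_eq!(std::mem::size_of::<Users>(), 12);
    let n: u64 = 16_371 * 2_547; // num_live_nodes * num_vars
    println!("vector of structs: {} bytes", 12 * n); // 500,363,244
    println!("three vectors:     {} bytes", 9 * n); // 375,272,433
}
```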

Alternatively, if we packed the bools into a bitset we could get it down to 338,787,613 bytes, which is a reduction of 161,575,631 bytes. This would get max-rss down from 637MB to 476MB, a reduction of 25%. But it might slow things down; it depends on whether the improved locality is outweighed by the extra instructions needed for bit manipulation.
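
And a sketch of the bitset variant, using a hand-rolled Vec<u64> bit vector just to show the idea (not whatever bitset type we'd actually use):

```rust
// One bit per bool instead of one byte, at the cost of a shift and a mask
// on every access.
struct BoolBits {
    words: Vec<u64>,
}

impl BoolBits {
    fn new(len: usize) -> Self {
        BoolBits { words: vec![0; (len + 63) / 64] }
    }
    fn set(&mut self, i: usize, value: bool) {
        if value {
            self.words[i / 64] |= 1 << (i % 64);
        } else {
            self.words[i / 64] &= !(1 << (i % 64));
        }
    }
    fn get(&self, i: usize) -> bool {
        (self.words[i / 64] >> (i % 64)) & 1 != 0
    }
}

fn main() {
    let n: u64 = 16_371 * 2_547;
    let mut used = BoolBits::new(n as usize);
    used.set(0, true);
    assert!(used.get(0) && !used.get(1));

    // Two u32 vectors plus the packed bool column:
    println!("~{} bytes", 8 * n + ((n + 63) / 64) * 8); // ~338.8 MB
}
```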

@nikomatsakis: do you have any ideas for improving this on the algorithmic side? Is this dense num_live_nodes * num_vars representation avoidable?

nnethercote (Contributor) commented Sep 13, 2018

Now for NLL. According to perf.rust-lang.org, an "Nll" build of keccak-clean has a max-rss of 1239MB. Here's a Massif profile of the heap usage:
[Massif profile: keccak-nll]

The 500MB Liveness spike is still visible, but it is outweighed by a later plateau that is dominated by three allocation sites, which allocate 308,588,896, 308,588,896, and 270,743,088 bytes respectively.

The three allocation sites are here:

```rust
let mut flow_inits = FlowAtLocation::new(do_dataflow(
    tcx,
    mir,
    id,
    &attributes,
    &dead_unwinds,
    MaybeInitializedPlaces::new(tcx, mir, &mdpe),
    |bd, i| DebugFormatted::new(&bd.move_data().move_paths[i]),
));
let flow_uninits = FlowAtLocation::new(do_dataflow(
    tcx,
    mir,
    id,
    &attributes,
    &dead_unwinds,
    MaybeUninitializedPlaces::new(tcx, mir, &mdpe),
    |bd, i| DebugFormatted::new(&bd.move_data().move_paths[i]),
));
let flow_ever_inits = FlowAtLocation::new(do_dataflow(
    tcx,
    mir,
    id,
    &attributes,
    &dead_unwinds,
    EverInitializedPlaces::new(tcx, mir, &mdpe),
    |bd, i| DebugFormatted::new(&bd.move_data().inits[i]),
));
```

Each do_dataflow() call ends up here:

```rust
vec![IdxSet::new_empty(bits_per_block); num_blocks]
```

In each case num_blocks is 25,994, and bits_per_block is 94,972 in the first two and 83,308 in the third.
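
For reference, the back-of-the-envelope arithmetic, assuming each IdxSet rounds its bit count up to whole 64-bit words:

```rust
// Rough size of one `Vec<IdxSet>` allocation: one row per basic block,
// each row rounded up to whole 64-bit words.
fn dense_bitset_bytes(num_blocks: usize, bits_per_block: usize) -> usize {
    let words_per_block = (bits_per_block + 63) / 64;
    num_blocks * words_per_block * 8
}

fn main() {
    // flow_inits and flow_uninits: ~308.6 MB each
    println!("{}", dense_bitset_bytes(25_994, 94_972));
    // flow_ever_inits: ~270.8 MB
    println!("{}", dense_bitset_bytes(25_994, 83_308));
}
```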

I tried changing on_entry_sets to a HybridIdxSet but it didn't help, because it turns out these bitsets end up with many bits set, so HybridIdxSet just switches to the dense representation anyway.
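
For anyone unfamiliar with it, the hybrid idea is roughly the following (a simplified sketch, not the actual HybridIdxSet code); it only wins while a set stays below the small sparse threshold, which these sets blow through almost immediately:

```rust
// Simplified sketch of a sparse/dense hybrid set: a handful of element
// indices while small, a dense bit vector for the whole domain once the
// threshold is exceeded. If most bits end up set, the dense fallback is
// reached almost immediately and nothing is saved over a plain dense set.
const SPARSE_MAX: usize = 8;

enum HybridSet {
    Sparse(Vec<usize>),
    Dense(Vec<u64>),
}

impl HybridSet {
    fn insert(&mut self, i: usize, domain_size: usize) {
        if let HybridSet::Sparse(elems) = self {
            if elems.contains(&i) {
                return;
            }
            if elems.len() < SPARSE_MAX {
                elems.push(i);
                return;
            }
            // Spill to dense: allocate domain_size bits regardless of how
            // many of them will ever be set.
            let mut words = vec![0u64; (domain_size + 63) / 64];
            for &e in elems.iter() {
                words[e / 64] |= 1 << (e % 64);
            }
            *self = HybridSet::Dense(words);
        }
        if let HybridSet::Dense(words) = self {
            words[i / 64] |= 1 << (i % 64);
        }
    }
}

fn main() {
    let mut row = HybridSet::Sparse(Vec::new());
    for i in 0..50_000 {
        row.insert(i, 94_972); // goes dense after SPARSE_MAX insertions
    }
}
```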

One trivial idea: it looks like flow_inits doesn't need to be live at the same time as flow_uninits and flow_ever_inits. If that's right, we should be able to reduce the peak by 308MB, from 1239MB to 931MB, a 25% reduction.
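
The general shape of the change, as a toy illustration of the scoping (not the actual rustc code):

```rust
// Toy illustration: ending the first buffer's lifetime before the next two
// are allocated lowers the peak, even though the total allocated over the
// whole run is unchanged.
fn main() {
    let summary = {
        let flow_inits = vec![0u8; 308 * 1024 * 1024]; // ~308 MB stand-in
        flow_inits.len() // pretend this is "use flow_inits, then discard it"
    }; // flow_inits is freed here, before the next two allocations

    let flow_uninits = vec![0u8; 308 * 1024 * 1024];
    let flow_ever_inits = vec![0u8; 270 * 1024 * 1024];
    println!("{} {} {}", summary, flow_uninits.len(), flow_ever_inits.len());
}
```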

@nikomatsakis: any other thoughts here from the algorithmic side?

nnethercote (Contributor) commented Sep 14, 2018

> I have one idea to improve this: Users is a triple containing two u32s and a bool, which means that it takes up 96 bits even though it only contains 65 bits of data. If we split it up so we have 3 vectors instead of a vector of triples, …

I have implemented this in #54211.

nnethercote (Contributor) commented Sep 14, 2018

> One trivial idea: it looks like flow_inits doesn't need to be live at the same time as flow_uninits and flow_ever_inits. If that's right, we should be able to reduce the peak by 308MB, from 1239MB to 931MB, a 25% reduction.

I have implemented this in #54213.

nnethercote (Contributor) commented Sep 21, 2018

#54420 improves the non-NLL case some more.

nnethercote (Contributor) commented Sep 24, 2018

> #54420 improves the non-NLL case some more.

Because of this, the NLL:non-NLL ratio for max-rss has worsened, to 269%, i.e. NLL uses 2.69x as much memory.

nikomatsakis (Contributor) commented Sep 25, 2018

@nnethercote two questions:

  1. Are you still investigating here?
  2. Do you know how the memory usage is distributed between the various dataflow computations?

nikomatsakis (Contributor) commented Sep 25, 2018

I guess this answers my question:

> In each case num_blocks is 25,994, and bits_per_block is 94,972 in the first two and 83,308 in the third.

nnethercote (Contributor) commented Sep 25, 2018

@nikomatsakis: I have run out of ideas on this one.

If it helps, here is what flow_uninits looks like, where each line represents a row (i.e. a basic block) and shows the number of set bits and the total number of bits (the total is rounded up to the nearest multiple of 64, because BitSet uses 64-bit words):

```
94971 / 94976
94968 / 94976
94968 / 94976
94966 / 94976
94964 / 94976
...
49547 / 94976
49543 / 94976
49542 / 94976
49540 / 94976
49537 / 94976
```

In other words, it is 25994 x 94976 bits (308.6MB), and the rows start off almost entirely set, and by the end drop down to about half set. About 75% of the bits are set.

And here's what flow_ever_inits looks like:

```
1 / 83328
79728 / 83328
5 / 83328
8 / 83328
9 / 83328
...
64142 / 83328
64146 / 83328
64148 / 83328
64151 / 83328
64155 / 83328
```

It is 25994 x 83328 bits (270.8MB). Apart from the second row, the rows start off almost empty and get fuller until they are 77% full by the end. About 38% of the bits are set.
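
The per-row numbers above can be reproduced with something along these lines (a hypothetical helper, not the exact instrumentation I used):

```rust
// Count set bits per row, where each row is a bitset stored as 64-bit
// words (so the total is rounded up to a multiple of 64).
fn print_row_stats(rows: &[Vec<u64>]) {
    for words in rows {
        let set: u32 = words.iter().map(|w| w.count_ones()).sum();
        println!("{} / {}", set, words.len() * 64);
    }
}

fn main() {
    let rows = vec![vec![!0u64; 2], vec![0b1011, 0]];
    print_row_stats(&rows); // prints "128 / 128" and "3 / 128"
}
```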

I didn't look at flow_inits because its lifetime is separate and so it's no longer part of the memory peak.

I can't see how to represent this data more compactly, and I don't understand the algorithm in enough detail to know if less data could be stored. I also looked into separating the lifetimes of the two structures but they are used in tandem, as far as I can tell.

pnkfelix (Member) commented Oct 2, 2018

Discussed with @nikomatsakis during triage of NLL issues.

We decided that the memory usage in this case should not block NLL's inclusion in RC2.

In terms of whether to put this on the Release milestone or not, we decided that it would be a better idea, at least in the short-to-medium term, to focus effort on Polonius, since that component might end up replacing the dataflow entirely, and thus the payoff from optimizing rustc_mir::dataflow may not be so great.

So, tagging as NLL-deferred, with the intention of revisiting after we've learned more about what we plan to do with Polonius, if anything.
