
1.20.0-beta.1 regression in memory usage #43789

Closed
carols10cents opened this issue Aug 10, 2017 · 19 comments
Labels
C-bug (Category: This is a bug.)
I-compilemem (Issue: Problems and improvements with respect to memory usage during compilation.)
regression-from-stable-to-stable (Performance or correctness regression from one stable version to another.)

Comments

@carols10cents
Member

This is going to be vague; please feel free to close if this is covered by more specific issues (I wasn't sure) or if this is too vague to be useful.

I've seen intermittent failures with Rust 1.20.0-beta.1 (and sometimes nightly as well, but never stable) on travis when testing crates.io. The failures appear to be because it has run out of memory; this job in particular failed with:

= note: /usr/bin/ld: final link failed: Cannot allocate memory

Other samples:

It seems to happen more often on one PR in particular (rust-lang/crates.io#869; see below). The particular job linked above is crates.io master plus the tar crate added as a dependency.

Changing the travis config to sudo: required (which apparently gives you more memory) gets that PR to pass on beta, so I'm not particularly blocked, but I just wanted to raise this in case it's a useful canary in the coal mine.
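For reference, a minimal sketch of what that workaround looks like in .travis.yml (the surrounding keys are omitted, and the extra memory on sudo-enabled builds is only what's reported above, not something I can vouch for):

```yaml
# Sketch: switch from Travis's container-based infrastructure to
# sudo-enabled VM builds, which reportedly come with more memory.
sudo: required   # previously: sudo: false
```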

@sfackler sfackler added I-compilemem Issue: Problems and improvements with respect to memory usage during compilation. regression-from-stable-to-beta Performance or correctness regression from stable to beta. labels Aug 10, 2017
@arielb1
Contributor

arielb1 commented Aug 10, 2017

= note: "cc" "-Wl,--as-needed" "-Wl,-z,noexecstack" "-m64" "-L" "/home/travis/.rustup/toolchains/beta-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "/home/travis/build/rust-lang/crates.io/target/debug/deps/server-9f3e8bea00ce5d30.0.o" "-o" "/home/travis/build/rust-lang/crates.io/target/debug/deps/server-9f3e8bea00ce5d30"

This looks like linking the final executable - I don't think there should be any Rust code running in parallel.

@Mark-Simulacrum Mark-Simulacrum added the C-bug Category: This is a bug. label Aug 11, 2017
@alexcrichton
Member

  • -Z time-passes for the stable and beta compilers shows nothing major that changed for the server binary.
  • The target directory grows by ~100MB from stable to beta, with the whole thing on the order of a gigabyte.
  • Link time for beta and stable is about the same, hovering around 4-5s locally using ld.bfd.
  • massif shows the peak memory usage of the linker is ~270MB for stable and ~290MB for beta.

Nothing looks too out of place? Maybe the travis machines were just getting unlucky?
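For anyone who wants to repeat these measurements, a rough sketch of the commands involved (the exact invocations here are approximations: -Z flags need a nightly or otherwise unstable-enabled toolchain, and the link command is abbreviated from the one quoted earlier in the thread):

```sh
# Per-pass compiler timings for the `server` binary:
cargo rustc --bin server -- -Z time-passes

# Peak heap usage of the link step, via valgrind's massif:
valgrind --tool=massif --massif-out-file=massif.out \
  cc -o target/debug/deps/server-9f3e8bea00ce5d30 \
  target/debug/deps/server-9f3e8bea00ce5d30.0.o   # ...plus the other linker flags quoted above
ms_print massif.out
```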

@sfackler
Member

It seems a bit strange that there's 10% more stuff being generated by beta compared to stable, right?

@carols10cents
Member Author

As discussed in the core team meeting, @alexcrichton was testing on crates.io master, whereas I was seeing pretty consistent failures on rust-lang/crates.io#869 (without the workaround commit I added to that PR that switches travis from sudo: false to sudo: required). Alex said he would redo his tests on that PR ❤️

@alexcrichton
Member

I tested for that PR as well, but unfortunately all the data is the same; there's very little difference between beta/stable on crates.io with that codebase. My only real guess here is that crates.io was hovering around some memory limit previously and just happened to blow past it this time around.

@carols10cents
Member Author

> My only real guess here is that crates.io was hovering around some memory limit previously and just happened to blow past it this time around.

@wycats seemed to indicate we should have been seeing more intermittent failures over time if that was the case; these cropped up suddenly.

That PR was adding a few more crates?

Do you think we should just close this? I'd love to track memory usage of crates.io compilations in travis over time, but it feels nontrivial to set up....

@alexcrichton
Member

Since crates.io builds on stable, it's not totally crazy that we might be able to add it to perf.rust-lang.org! It may take some time to fully integrate it and get good timings, though. In the meantime I think I'm at least personally out of ideas, although we could always leave the regression open to see if others would like to give it a stab.

@Marwes
Contributor

Marwes commented Aug 21, 2017

Seeing this in gluon as well. https://travis-ci.org/gluon-lang/gluon/jobs/266597837. Is enabling sudo the only workaround for now?

@alexcrichton
Member

@Marwes the error there is "No space left on device", is that related to this issue?

@Marwes
Contributor

Marwes commented Aug 22, 2017

@alexcrichton Hmm, I guess that would be disk space, actually. I started seeing various crashes when linking on beta and figured that this issue would be related.

@jonhoo
Contributor

jonhoo commented Aug 22, 2017

I'm seeing this on a binary that depends on the fantoccini crate, which uses futures heavily. I wonder if it's somehow related to #38528? On nightly, the compilation simply fails with error: Could not compile "foo"., and --verbose only adds process didn't exit successfully. It's only on beta that the fatal runtime error: allocator memory exhausted message gets printed.

Edit: actually, I guess my issue is #43613. Maybe the two are related?
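For context, here's a hypothetical reduction of the kind of futures-heavy code involved (illustrative only, not taken from fantoccini): every combinator wraps the previous future's concrete type, so long chains hand the compiler very deeply nested types.

```rust
// Illustrative sketch against futures 0.1; names and chain length are made up.
extern crate futures;

use futures::future::{self, Future};

fn main() {
    // Each `and_then`/`map` call nests the previous future's type inside a
    // new one, e.g. Map<AndThen<AndThen<FutureResult<u32, ()>, ..>, ..>, ..>,
    // so the concrete type grows with every link in the chain.
    let chained = future::ok::<u32, ()>(1)
        .and_then(|x| future::ok(x + 1))
        .and_then(|x| future::ok(x + 1))
        .and_then(|x| future::ok(x + 1))
        .map(|x| x * 2);

    // Drive the future to completion on the current thread (futures 0.1).
    assert_eq!(chained.wait(), Ok(8));
}
```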

@alexcrichton alexcrichton modified the milestone: 1.20 Aug 23, 2017
@alexcrichton
Member

This regression's gonna slip to stable as there's no clear action to take.

@alexcrichton alexcrichton added regression-from-stable-to-stable Performance or correctness regression from one stable version to another. and removed regression-from-stable-to-beta Performance or correctness regression from stable to beta. labels Aug 23, 2017
jonhoo added a commit to jonhoo/fantoccini that referenced this issue Sep 1, 2017
@jonhoo
Contributor

jonhoo commented Sep 1, 2017

This is causing me issues now whenever I rely on hyper (which is now entirely futures-based). It only takes a couple of requests before the program can't be compiled :/

@Hoverbear
Contributor

I have a similar example: https://travis-ci.org/Hoverbear/digitalocean/builds/258872174

@nikomatsakis
Contributor

We've made several improvements to memory usage now -- is there a way to verify whether there has been progress here?

@jonhoo
Contributor

jonhoo commented Sep 14, 2017

The fix for #43613 solved all the issues I was having.

@Marwes
Contributor

Marwes commented Sep 15, 2017

I tried running gluon without sudo (4GB RAM instead of 7.5GB) on travis a few times and couldn't get it to crash on stable, beta, or nightly, which was odd given that the changes shouldn't have propagated to stable or beta yet. I can't say either way whether it is fixed; I can just open another issue if it reappears.

@nikomatsakis
Contributor

OK, I'll close this for now.

@stochastic-thread

This still happens on a 2GB DigitalOcean "droplet" when trying to install parity (https://github.com/paritytech/parity).

phrohdoh pushed a commit to phrohdoh/fantoccini that referenced this issue May 11, 2018