cmd/compile: out of memory compiling cmd/compile/internal/ssa with 1GB RAM #27739
Building tip on Ubuntu 18.04 on a Digital Ocean VM with 1GB of RAM, at commit 83dfc3b
I can reproduce this out-of-memory condition 100% of the time (in the prove pass in SSA):
1GB of memory has been more than enough to build the toolchain in the past.
Barring any clever ideas about how to debug this, I'll try to bisect and hope that there was only one commit that reliably introduced this regression.
The Go 1.11 compiler does more work, and it also contains more code. So there is a double factor when using the Go 1.11 compiler to compile the Go 1.11 compiler: it may use more memory even when compiling the same code, and it now has more code to compile.
Maybe a workaround is to use an older (or newer) bootstrap compiler?
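As a sketch of that workaround: make.bash reads the GOROOT_BOOTSTRAP environment variable to locate the toolchain that builds the first stage, so pointing it at a different release changes which compiler does the bootstrapping. The install path below is hypothetical.

```shell
# Point the build at an alternative bootstrap toolchain.
# /usr/local/go1.10.4 is a hypothetical install location.
export GOROOT_BOOTSTRAP=/usr/local/go1.10.4
"$GOROOT_BOOTSTRAP/bin/go" version   # confirm which compiler will bootstrap the build
cd go/src && ./make.bash
```

(If GOROOT_BOOTSTRAP is unset, make.bash defaults to $HOME/go1.4.)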
When bootstrapping with 1.10.4, bisect blames cc09212.
That commit doesn't make much sense as the culprit either, but both bisects point to a regression introduced somewhere in or around May of this year.
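For reference, a bisect like the ones described above can be sketched as follows. This is a manual sketch, not the exact commands used in this thread; the good/bad endpoints are assumptions based on the commits mentioned.

```shell
# Manual bisect over the suspect range; endpoints are illustrative.
cd go/src
git bisect start
git bisect bad master        # tip OOMs on the 1GB VM
git bisect good go1.10.4     # known to build within 1GB
# git now checks out a candidate commit; rebuild and classify it,
# repeating until git reports the first bad commit:
if ./make.bash; then git bisect good; else git bisect bad; fi
```

Note that make.bash's exit status is an imperfect OOM signal (the kernel's OOM killer can take out an unrelated process instead), so each step deserves a sanity check of the failure mode before classifying the commit.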
What are the stats of the VM you are using? How many cores? Is any swap configured? From my informal testing you need ~768MB of RAM per core on a 64-bit machine to complete ./all.bash. …
On 19 Sep 2018, at 06:54, Phil ***@***.*** wrote: make.bash on go1.11 using go1.10.4 as a bootstrap still fails. I'm bisecting the same commit range.
It's the cheapest Digital Ocean VM: 1 vCPU, 1GB memory, no swap.
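For anyone comparing their own VM against the ~768MB-per-core rule of thumb, the relevant numbers can be read off a Linux box with standard coreutils/util-linux commands:

```shell
nproc                                         # vCPU count
grep MemTotal /proc/meminfo                   # total RAM, in kB
swapon --show                                 # empty output means no swap configured
echo "rough MB needed: $(( $(nproc) * 768 ))" # per the rule of thumb above
```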
It's less important to me how many resources one needs to build the toolchain, and more important that things are moving in the wrong direction. Shouldn't peak resource consumption by the compiler be determined by either the largest package (in the compiler front-end) or the largest function (in the back-end)? The out-of-memory condition doesn't occur in the linker, where I would expect resource consumption to grow roughly in line with the repo gaining ~10% more code.
I think we should have a builder that has a relatively small amount of memory. Somewhat related, in #26867 I reported how
We can assume that Linux is fairly common on machines with less memory (small computers, routers, VMs, etc), and that a 64-bit architecture like amd64 should stress test the memory more than a 32-bit architecture would.
We already have special builders like
@mvdan, let's not combine two bugs into one. It's hard to label & track that way.
Could you file a separate builder bug about a small-memory config? (Perhaps we could just make an existing builder, such as cgo or noopt, the small one; you could float that in the bug, or I could reply there later.)
I'm going to remove the "Builders" label from this bug.
Oct 25, 2018: changed the title from "make.bash: out of memory" to "cmd/compile: out of memory compiling cmd/compile/internal/ssa with 1GB RAM".

Dec 13, 2018:
@josharian I agree that #20104 would reduce memory pressure when compiling cmd/compile/internal/ssa. However, you'll notice the compiler fails in the second bootstrap phase rather than the first, which means compiling the current code with the old compiler succeeds, but compiling the same code with the current compiler fails. In other words, the regression is in the compiler's memory use, rather than the size of the code to be compiled. The regression in the compiler's performance seems higher-priority to me, since it impacts more than just folks working on the Go compiler itself.