
runtime: large address space footprint on 32-bit linux #35677

Open
tmm1 opened this issue Nov 18, 2019 · 4 comments

@tmm1 (Contributor) commented Nov 18, 2019

What version of Go are you using (go version)?

$ go version
go version go1.13.4 linux/arm

Summary

Running a simple Go program on linux/arm (or any other 32-bit Linux architecture) shows that ~800MB of virtual address space is reserved up front:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("pmap", "-x", fmt.Sprint(os.Getpid()))
	cmd.Stdout = os.Stdout
	cmd.Run()
}
8627:   /tmp/testpmap
Address   Kbytes     RSS   Dirty Mode  Mapping
00010000     624     444     444 r-x-- testpmap
000b0000     704     176     176 r---- testpmap
00160000      72      36      36 rw--- testpmap
00172000      68      28      28 rw---   [ anon ]
01400000    4096     196     196 rw---   [ anon ]
01800000  524288       0       0 -----   [ anon ]
66cb4000     644      52      52 rw---   [ anon ]
66d55000  264060       0       0 -----   [ anon ]
76f34000     256      32      32 rw---   [ anon ]
76f74000       4       0       0 r-x--   [ anon ]
7ee2c000     132      12      12 rw---   [ stack ]
ffff0000       4       0       0 r-x--   [ anon ]
-------- ------- ------- -------
total kB  794952     976     976

The Linux kernel can be compiled with one of three vmsplit modes, which give userspace 3GB, 2GB, or 1GB of the 4GB address space respectively: VMSPLIT_3G, VMSPLIT_2G, VMSPLIT_1G.


I have a Go program that makes heavy use of memory-mapped files and runs on ARM appliances where the vendor has compiled their kernel with VMSPLIT_2G. Since the Go runtime reserves 40% of the available address space up front (~800MB of 2GB), my program is limited in the number of files it can mmap for its own purposes. When the address space is exhausted, bad things happen, including panics when trying to spawn threads:

runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
PC=0xb68bd6 m=2 sigcode=4294967290
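The exhaustion itself is easy to demonstrate outside the runtime. The sketch below (illustrative, not related to the runtime's own reservations) keeps reserving inaccessible PROT_NONE chunks until mmap fails, which is the same resource the thread-spawn failure above runs out of:

```go
package main

import (
	"fmt"
	"syscall"
)

// reserveUntilExhausted keeps reserving 1GB chunks of inaccessible
// (PROT_NONE) anonymous address space until mmap fails, and returns
// the total reserved. The pages are never touched, so RSS stays near
// zero; only address space is consumed.
func reserveUntilExhausted() uint64 {
	const chunk = 1 << 30
	var total uint64
	for {
		_, err := syscall.Mmap(-1, 0, chunk,
			syscall.PROT_NONE,
			syscall.MAP_ANON|syscall.MAP_PRIVATE)
		if err != nil {
			return total
		}
		total += chunk
	}
}

func main() {
	fmt.Printf("reserved ~%d GB of address space before mmap failed\n",
		reserveUntilExhausted()>>30)
}
```

On a 32-bit VMSPLIT_2G system this tops out well under 2GB; anything the runtime reserved up front comes straight out of that budget.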

The Go runtime documents its memory mappings in malloc.go. The ~800MB reserved consists of two large reservations: ~258MB for heap-arena metadata, and 512MB for the initial heap.

go/src/runtime/malloc.go

Lines 541 to 545 in a23f9af

// 1. We reserve space for all heapArenas up front so
// they don't get interleaved with the heap. They're
// ~258MB, so this isn't too bad. (We could reserve a
// smaller amount of space up front if this is a
// problem.)

go/src/runtime/malloc.go

Lines 551 to 552 in a23f9af

// 3. We try to stake out a reasonably large initial
// heap reservation.


The 512MB initial reservation can be tweaked with a patch to arenaSizes:

go/src/runtime/malloc.go

Lines 586 to 592 in a23f9af

arenaSizes := []uintptr{
	512 << 20,
	256 << 20,
	128 << 20,
}
for _, arenaSize := range arenaSizes {
	a, size := sysReserveAligned(unsafe.Pointer(p), arenaSize, heapArenaBytes)

However, it is not clear to me how I can reduce the ~258MB reservation. The code comment states "We could reserve a smaller amount of space up front if this is a problem," so I'm looking for some guidance on how to do this.

Since userspace on 32-bit Linux can access at most 3GB of address space (when the kernel is compiled with VMSPLIT_3G, which is the default), it seems that at the very least the ~258MB reservation could be reduced by a quarter, to roughly 194MB.
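The arithmetic behind that estimate, sketched with assumed 32-bit constants (4MB arenas, a heap bitmap of 2 bits per 4-byte pointer word; the real reservation is slightly larger because it also covers other per-arena metadata):

```go
package main

import "fmt"

func main() {
	// Assumed 32-bit runtime constants (see runtime/malloc.go for
	// the real values; these are illustrative).
	const (
		addrSpace      uint64 = 1 << 32 // full 32-bit address space
		heapArenaBytes        = 4 << 20 // 4MB arenas on 32-bit
		ptrSize               = 4
		// heap bitmap: 2 bits per pointer-sized word in the arena
		bitmapPerArena = heapArenaBytes / ptrSize / 4
	)

	// Metadata sized for the full 4GB address space.
	arenas := addrSpace / heapArenaBytes
	fmt.Printf("full 4GB: %d arenas, ~%d MB of bitmaps\n",
		arenas, arenas*bitmapPerArena>>20)

	// Metadata sized for 3GB of VMSPLIT_3G userspace.
	arenas3g := 3 * (addrSpace / 4) / heapArenaBytes
	fmt.Printf("3GB userspace: %d arenas, ~%d MB of bitmaps\n",
		arenas3g, arenas3g*bitmapPerArena>>20)
}
```

The bitmap alone accounts for 256MB vs 192MB, which is where the "reduce by a quarter" figure comes from.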

cc @aclements

@toothrot toothrot added this to the Backlog milestone Nov 18, 2019
@toothrot (Contributor) commented Nov 18, 2019

/cc @rsc and @randall77 as runtime owners

I can't quite tell if this is a question, in which case https://golang.org/wiki/Questions may be a faster way for you to get a response (especially from golang-nuts, where it will reach a much broader audience).

@tmm1 (Contributor, Author) commented Nov 18, 2019

I think this qualifies as a bug. Succinctly put: on 32-bit Linux, heapArenaAlloc reserves ~258MB when it needs at most ~194MB.

My question is mainly directed at @aclements, regarding his code comment:

The code comment states "We could reserve a smaller amount of space up front if this is a problem," so I'm looking for some guidance on how to do this.

If I can get an answer to that, I am happy to open a CL that closes this issue.

@randall77 (Contributor) commented Dec 3, 2019

@aclements

@aclements (Member) commented Dec 3, 2019

Sorry, I'd missed this.

I think my comment was saying that it could literally just reserve less space for the heap arenas, and fall back to using fresh mmaps when mheap_.heapArenaAlloc is exhausted.

Right now, 32-bit reserves address space for all possible arenas just to avoid interleaving the memory reservations with the heap, since that would cause memory fragmentation and make it more likely that a large allocation would fail. But on 64-bit, we don't reserve any space up front for the arenas and instead just mmap them as we need them.

It's probably a good idea to still reserve some space on 32-bit for the arena metadata, but it could probably be much less. We could also try to generate hint addresses that make it less likely that the heap and the heapArenas will collide. For example, we could continue to make space for all heapArenas before the initial heap hint, but just not reserve that space. Even without the reservation, the heap is likely to grow up from the hint and not interfere with the heapArenas space unless we're really tight on address space.
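The hint-without-reservation idea can be sketched as a toy model (allocAt, its fallback step, and the addresses are all invented for illustration; the real mechanism is the runtime's sysReserve hints): requests that can honor the hint stay contiguous, and only when the hinted region is already taken does allocation spill elsewhere.

```go
package main

import "fmt"

// allocAt mimics "try the hinted address, fall back to another region"
// without any up-front reservation. taken records which 64MB-aligned
// regions are already in use.
func allocAt(hint uintptr, taken map[uintptr]bool) uintptr {
	if !taken[hint] {
		taken[hint] = true
		return hint // hint honored: heap stays where we wanted it
	}
	// Hint already occupied: fall back to the next free 64MB region.
	fallback := hint + (1 << 26)
	for taken[fallback] {
		fallback += 1 << 26
	}
	taken[fallback] = true
	return fallback
}

func main() {
	taken := map[uintptr]bool{}
	hint := uintptr(0x20000000) // hypothetical initial heap hint
	a := allocAt(hint, taken)
	b := allocAt(hint, taken) // second request must fall back
	fmt.Printf("first: %#x, second: %#x\n", a, b)
	// → first: 0x20000000, second: 0x24000000
}
```

As long as nothing else lands in the hinted range, the outcome matches a real reservation at zero address-space cost, which is the point of the proposal.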
