runtime: don't corrupt arena bounds on low mmap
If mheap.sysAlloc doesn't have room in the heap arena for an
allocation, it will attempt to map more address space with sysReserve.
sysReserve is given a hint, but can return any unused address range.
Currently, mheap.sysAlloc incorrectly assumes the returned region will
never fall between arena_start and arena_used. If it does,
mheap.sysAlloc will blindly accept the new region as the new
arena_used and arena_end, causing these to decrease and make it so any
Go heap above the new arena_used is no longer considered part of the
Go heap. This assumption *used to be* safe because we had all memory
between arena_start and arena_used mapped, but when we switched to an
arena_start of 0 on 32-bit, it ceased to be safe.
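The corruption described above can be sketched with a toy model of the arena bounds (field names borrowed from mheap's arena_start/arena_used/arena_end purely for illustration; this is not the runtime code). The old check only compared the returned region against arena_start, so a mapping below arena_used was accepted and shrank the recorded bounds:

```go
package main

import "fmt"

// arena is a simplified model of mheap's linear arena: addresses in
// [start, used) are live heap, [used, end) is reserved headroom.
type arena struct {
	start, used, end uintptr
}

// acceptBuggy models the pre-fix behavior: any region at or above
// arena_start was accepted, even one below the current arena_used.
func (a *arena) acceptBuggy(p, size uintptr) {
	if a.start <= p { // old check: only bounded below by arena_start
		a.used = p + size
		a.end = p + size
	}
}

func main() {
	a := arena{start: 0, used: 0x400000, end: 0x400000}
	// The kernel ignores the hint and returns a region *below*
	// the current arena_used.
	a.acceptBuggy(0x100000, 0x10000)
	// arena_used has decreased: live heap above 0x110000 is no
	// longer considered part of the Go heap.
	fmt.Printf("used=%#x end=%#x\n", a.used, a.end)
}
```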

Most likely, we've only recently seen this bug occur because we
usually start arena_used just above the binary, which is low in the
address space. Hence, the kernel is very unlikely to give us a region
before arena_used.

Since mheap.sysAlloc is a linear allocator, there's not much we can do
to handle this well. Hence, we fix this problem by simply rejecting
the new region if it isn't after arena_end. In this case, we'll take
the fall-back path and mmap a small region at any address just for the
requested memory.
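In the same toy model (again with illustrative field names, not the real runtime code, and a made-up maxMem limit standing in for _MaxMem), the fix amounts to requiring the new region to lie strictly above arena_end, so the bounds can only grow; anything else is rejected and the caller falls back to a one-off mapping:

```go
package main

import "fmt"

// maxMem is an illustrative 32-bit-style arena size limit, standing
// in for the runtime's _MaxMem; the exact value is not the point.
const maxMem = 1 << 31

// arena is a simplified model of mheap's linear arena bounds.
type arena struct {
	start, used, end uintptr
}

// acceptFixed models the patched check: the reservation must lie
// strictly above the current arena_end and fit within the arena
// limit, so arena_used and arena_end never decrease.
func (a *arena) acceptFixed(p, size uintptr) bool {
	if a.end < p && p+size-a.start-1 <= maxMem {
		a.used = p + size
		a.end = p + size
		return true
	}
	return false // caller takes the fall-back mmap path instead
}

func main() {
	a := arena{start: 0, used: 0x400000, end: 0x400000}
	fmt.Println(a.acceptFixed(0x100000, 0x10000)) // false: below arena_end, rejected
	fmt.Println(a.acceptFixed(0x500000, 0x10000)) // true: extends the arena upward
	fmt.Printf("used=%#x\n", a.used)
}
```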

Fixes #20259.

Change-Id: Ib72e8cd621545002d595c7cade1e817cfe3e5b1e
Reviewed-on: https://go-review.googlesource.com/43870
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
aclements committed May 23, 2017
1 parent 95d991d commit e5a5c03
14 changes: 13 additions & 1 deletion src/runtime/malloc.go
```diff
@@ -411,12 +411,14 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 		if p == 0 {
 			return nil
 		}
+		// p can be just about anywhere in the address
+		// space, including before arena_end.
 		if p == h.arena_end {
 			// The new reservation is contiguous
 			// with the old reservation.
 			h.arena_end = new_end
 			h.arena_reserved = reserved
-		} else if h.arena_start <= p && p+p_size-h.arena_start-1 <= _MaxMem {
+		} else if h.arena_end < p && p+p_size-h.arena_start-1 <= _MaxMem {
 			// We were able to reserve more memory
 			// within the arena space, but it's
 			// not contiguous with our previous
@@ -430,6 +432,16 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 			h.setArenaUsed(used, false)
 			h.arena_reserved = reserved
 		} else {
+			// We got a mapping, but it's not
+			// linear with our current arena, so
+			// we can't use it.
+			//
+			// TODO: Make it possible to allocate
+			// from this. We can't decrease
+			// arena_used, but we could introduce
+			// a new variable for the current
+			// allocation position.
+
 			// We haven't added this allocation to
 			// the stats, so subtract it from a
 			// fake stat (but avoid underflow).
```
