
runtime: make the page allocator scale #35112

Open
mknyszek opened this issue Oct 23, 2019 · 39 comments

@mknyszek mknyszek commented Oct 23, 2019

Over the course of the last year, we've seen several cases where making relatively minor changes to the allocator slow path, which allocates pages from the heap, caused serious performance issues (#32828, #31678). The problem stemmed largely from contention on the heap lock: a central m-based spinlock (so we can't schedule another g while it's waiting!) which guards nearly all operations on the page heap. Since Go 1.11 (and likely earlier), we've observed barging behavior on this lock in applications that allocate larger objects (~1K) frequently, which indicates a collapsing lock. Furthermore, these applications tend to stay the same or worsen in performance as the number of available cores on the machine increases. This represents a fundamental scalability bottleneck in the runtime.

Currently the page allocator is based on a treap (and formerly had a small linked-list based cache for smaller free spans), but I propose we rethink it and rewrite it as something that:

  1. Is more cache-friendly, branch-predictor friendly, etc. on the whole.
  2. Has a lockless fast path.

The former just makes the page allocator faster and less likely to fall into bad paths in the microarchitecture, while the latter directly reduces contention on the lock.

While increased performance across the board is what we want, what we're most interested in solving here is the scalability bottleneck: when we increase the number of cores available to a Go application, we want to see a performance improvement.

Edit: the design doc is now available (see https://go-review.googlesource.com/c/proposal/+/202857).

I've already built both an out-of-tree and an in-tree prototype of a solution based on a bitmap representing the whole heap and a radix tree over that bitmap. The key idea with bitmaps here is that we can "land grab" several pages at once and cache them in a P. Two RPC benchmarks (a Google-internal one and one based on Tile38) show up to a 30% increase in throughput and up to a 40% drop in tail latencies (90th, 99th percentile) on a 48-core machine, with the benefit increasing with the number of cores available to the benchmark.
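To make the bitmap idea concrete, here is a greatly simplified, hypothetical sketch: one bit per page, searched for a run of n free pages. The real design layers a radix tree of packed summaries over the bitmap so a search never has to scan the whole heap, and a P can grab up to 64 pages at a time into a local cache.

```go
package main

import "fmt"

// pageBitmap is a toy free/allocated bitmap: one bit per page, 64 pages per
// word. 0 means free, 1 means allocated.
type pageBitmap []uint64

// find returns the index of the first run of n consecutive free pages, or -1.
func (b pageBitmap) find(n int) int {
	run, start := 0, 0
	for i := 0; i < len(b)*64; i++ {
		if b[i/64]&(uint64(1)<<(i%64)) == 0 { // page i is free
			if run == 0 {
				start = i
			}
			run++
			if run == n {
				return start
			}
		} else {
			run = 0
		}
	}
	return -1
}

// alloc marks n pages starting at base as allocated.
func (b pageBitmap) alloc(base, n int) {
	for i := base; i < base+n; i++ {
		b[i/64] |= uint64(1) << (i % 64)
	}
}

func main() {
	bm := make(pageBitmap, 4) // 256 pages, all free
	bm.alloc(0, 3)
	fmt.Println(bm.find(5)) // prints 3: the first run of 5 free pages
}
```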

@mknyszek mknyszek added this to the Go1.14 milestone Oct 23, 2019
@mknyszek mknyszek self-assigned this Oct 23, 2019
@ALTree ALTree added the Performance label Oct 23, 2019

@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/200439 mentions this issue: runtime: place lower limit on trigger ratio


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/195698 mentions this issue: runtime: count scavenged bits for new allocation for new page allocator


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/190621 mentions this issue: runtime: add packed bitmap summaries


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/196640 mentions this issue: runtime: make more page sweeper operations atomic


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/195697 mentions this issue: runtime: add scavenging code for new page allocator


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/196643 mentions this issue: runtime: add page cache and tests


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/190622 mentions this issue: runtime: add new page allocator core


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/190619 mentions this issue: runtime: add new page allocator constants and description


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/196642 mentions this issue: runtime: add per-p mspan cache


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/201763 mentions this issue: runtime: make the scavenger self-paced


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/190620 mentions this issue: runtime: add mallocbits and tests


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/196639 mentions this issue: runtime: remove unnecessary large parameter to mheap_.alloc


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/195701 mentions this issue: runtime: add per-p page allocation cache


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/201764 mentions this issue: runtime: integrate new page allocator into runtime


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/195700 mentions this issue: runtime: remove old page allocator


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/195699 mentions this issue: runtime: add option to scavenge with lock held throughout


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/201765 mentions this issue: runtime: switch to new page allocator


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/196638 mentions this issue: runtime: ensure heap memstats are updated atomically


@gopherbot gopherbot commented Oct 23, 2019

Change https://golang.org/cl/196641 mentions this issue: runtime: refactor mheap_.alloc* into allocSpan


@mknyszek mknyszek commented Oct 23, 2019

Gopherbot isn't updating the issue with this, but the design doc is at https://go-review.googlesource.com/c/proposal/+/202857. I'll update the first post in the issue once it lands.

CC: @aclements @randall77 @cherrymui


@gopherbot gopherbot commented Oct 24, 2019

Change https://golang.org/cl/203318 mentions this issue: runtime: fix (*gcSweepBuf).block guarantees


@gopherbot gopherbot commented Oct 28, 2019

Change https://golang.org/cl/203858 mentions this issue: runtime: compute whether a span needs zeroing in the new page allocator


@gopherbot gopherbot commented Oct 28, 2019

Change https://golang.org/cl/203859 mentions this issue: runtime: make allocNeedsZero lock-free


@un000 un000 commented Oct 29, 2019

#31222 may be related. Could you test the new allocator with the reproducer from that issue?


@mknyszek mknyszek commented Oct 29, 2019

@un000 It's possible but unlikely this will help.

Let's assume that the problem in that issue is entirely that a large allocation blocks STW for 43 ms. First, 100 MB allocations don't really get any faster with this proposal. In that issue, asking for 100 MB is definitely going to go to the OS and map pages in. No matter what, we're bound by how quickly we can get 100 MB worth of committed address space from the OS.


@mknyszek mknyszek commented Oct 29, 2019

@un000 Well, it helps maxWait at least and brings it down to 1.0s vs. 1.3s on my machine, so perhaps I spoke too soon.

But I'm hitting stalls trying to copy the traces to verify whether there's an improvement in the STW delay. I'll try again later and let you know what I find.

gopherbot pushed a commit that referenced this issue Nov 7, 2019
This change adds a scavenger for the new page allocator along with
tests. The scavenger walks over the heap backwards once per GC, looking
for memory to scavenge. It walks across the heap without any lock held,
searching optimistically. If it finds what appears to be a scavenging
candidate it acquires the heap lock and attempts to verify it. Upon
verification it then scavenges.

Notably, unlike the old scavenger, it doesn't show any preference for
huge pages and instead follows a more strict last-page-first policy.

Updates #35112.

Change-Id: I0621ef73c999a471843eab2d1307ae5679dd18d6
Reviewed-on: https://go-review.googlesource.com/c/go/+/195697
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
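A minimal sketch of the optimistic scan-then-verify pattern this commit describes; all names (chunk, looksScavengeable, scavengeVerified) are illustrative, not the runtime's identifiers:

```go
package main

import "sync"

var heapLock sync.Mutex

// chunk is a stand-in for a fixed-size region of the heap's bitmap.
type chunk struct {
	freeUnscavenged int // pages that are free but not yet returned to the OS
}

// looksScavengeable is the optimistic, lock-free check; it may be stale.
func looksScavengeable(c *chunk) bool { return c.freeUnscavenged > 0 }

// scavengeVerified re-checks the candidate under the heap lock and, if it
// still holds up, returns its pages to the OS.
func scavengeVerified(c *chunk) {
	if c.freeUnscavenged > 0 {
		c.freeUnscavenged = 0 // release the pages (sysUnused in the real runtime)
	}
}

// scavengePass walks the heap from high addresses to low, taking the heap
// lock only when an optimistic check finds a candidate.
func scavengePass(heap []chunk) {
	for i := len(heap) - 1; i >= 0; i-- {
		if !looksScavengeable(&heap[i]) {
			continue
		}
		heapLock.Lock()
		scavengeVerified(&heap[i])
		heapLock.Unlock()
	}
}

func main() {
	scavengePass(make([]chunk, 8))
}
```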
gopherbot pushed a commit that referenced this issue Nov 7, 2019
This change makes it so that the new page allocator returns the number
of pages that are scavenged in a new allocation so that mheap can update
memstats appropriately.

The accounting could be embedded into pageAlloc, but that would make
the new allocator more difficult to test.

Updates #35112.

Change-Id: I0f94f563d7af2458e6d534f589d2e7dd6af26d12
Reviewed-on: https://go-review.googlesource.com/c/go/+/195698
Reviewed-by: Austin Clements <austin@google.com>
gopherbot pushed a commit that referenced this issue Nov 7, 2019
This change adds a "locked" parameter to scavenge() and scavengeone()
which allows these methods to be run with the heap lock acquired, and
synchronously with respect to others which acquire the heap lock.

This mode is necessary for both heap-growth scavenging (multiple
asynchronous scavengers here could be problematic) and
debug.FreeOSMemory.

Updates #35112.

Change-Id: I24eea8e40f971760999c980981893676b4c9b666
Reviewed-on: https://go-review.googlesource.com/c/go/+/195699
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
gopherbot pushed a commit that referenced this issue Nov 7, 2019
Currently the runtime background scavenger is paced externally,
controlled by a collection of variables which together describe a line
that we'd like to stay under.

However, the line to stay under is computed as a function of the number
of free and unscavenged huge pages in the heap at the end of the last
GC. Aside from this number being inaccurate (which is still acceptable),
the scavenging system also makes an order-of-magnitude assumption as to
how expensive scavenging a single page actually is.

This change simplifies the scavenger in preparation for making it
operate on bitmaps. It makes it so that the scavenger paces itself, by
measuring the amount of time it takes to scavenge a single page. The
scavenging methods on mheap already avoid breaking huge pages, so if we
scavenge a real huge page, then we'll have paced correctly, otherwise
we'll sleep for longer to avoid using more than scavengePercent wall
clock time.

Unfortunately, all this involves measuring time, which is quite tricky.
Currently we don't directly account for long process sleeps or OS-level
context switches (which is quite difficult to do in general), but we do
account for Go scheduler overhead and variations in it by maintaining an
EWMA of the ratio of time spent scavenging to the time spent sleeping.
This ratio, as well as the sleep time, are bounded in order to deal with
the aforementioned OS-related anomalies.

Updates #35112.

Change-Id: Ieca8b088fdfca2bebb06bcde25ef14a42fd5216b
Reviewed-on: https://go-review.googlesource.com/c/go/+/201763
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
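A toy sketch of the self-pacing idea, not the runtime's implementation: after each unit of scavenging work, sleep so that the work stays near a small target fraction of wall-clock time, smoothing a correction factor with an EWMA to absorb scheduler overhead. The constants and names here are assumptions for illustration only.

```go
package pacing

import "time"

const (
	targetFraction = 0.01 // aim to spend ~1% of wall-clock time scavenging
	alpha          = 0.25 // EWMA smoothing factor
)

// runScavenger sleeps between units of work so that measured work stays near
// targetFraction of wall-clock time, correcting the requested sleep by an
// EWMA-smoothed factor so scheduler delays don't skew the pacing.
func runScavenger(scavengeOnePage func() time.Duration) {
	fudge := 1.0 // EWMA of requested-vs-actual sleep ratio
	for {
		work := scavengeOnePage()
		want := time.Duration(float64(work) / targetFraction * fudge)
		if want < time.Millisecond {
			want = time.Millisecond // bound the sleep to avoid degenerate values
		}
		start := time.Now()
		time.Sleep(want)
		got := time.Since(start)
		// If sleeps consistently run long (or short), adjust future requests.
		fudge = alpha*(float64(want)/float64(got)) + (1-alpha)*fudge
	}
}
```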
gopherbot pushed a commit that referenced this issue Nov 7, 2019
This change integrates all the bits and pieces of the new page allocator
into the runtime, behind a global constant.

Updates #35112.

Change-Id: I6696bde7bab098a498ab37ed2a2caad2a05d30ec
Reviewed-on: https://go-review.googlesource.com/c/go/+/201764
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
gopherbot pushed a commit that referenced this issue Nov 7, 2019
This change adds the allocNeedZero method to mheap which uses the new
heapArena field zeroedBase to determine whether a new allocation needs
zeroing. The purpose of this work is to avoid zeroing memory that is
fresh from the OS in the context of the new allocator, where we no
longer have the concept of a free span to track this information.

The new field in heapArena, zeroedBase, is small, which runs counter to
the advice in the doc comment for heapArena. Since heapArenas are
already not a multiple of the system page size, this advice seems stale,
and we're OK with using an extra physical page for a heapArena. So, this
change also deletes the comment with that advice.

Updates #35112.

Change-Id: I688cd9fd3c57a98a6d43c45cf699543ce16697e2
Reviewed-on: https://go-review.googlesource.com/c/go/+/203858
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
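A simplified sketch of the zeroedBase idea (hypothetical names, not the runtime's exact code): each arena remembers the highest offset ever handed out; everything above it is untouched memory fresh from the OS, which is already zero, so only allocations overlapping the used region need explicit zeroing.

```go
package zeroing

// heapArenaSketch is a simplified stand-in for the runtime's heapArena.
type heapArenaSketch struct {
	base       uintptr // first address of the arena
	zeroedBase uintptr // offset below which memory may contain stale data
}

// allocNeedsZero reports whether the allocation [addr, addr+size) must be
// zeroed explicitly, and records that the range has now been used.
func (a *heapArenaSketch) allocNeedsZero(addr, size uintptr) bool {
	off := addr - a.base
	needsZero := off < a.zeroedBase // overlaps memory that was used before
	if end := off + size; end > a.zeroedBase {
		a.zeroedBase = end
	}
	return needsZero
}
```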

@gopherbot gopherbot commented Nov 7, 2019

Change https://golang.org/cl/205937 mentions this issue: runtime: define maximum supported physical page and huge page sizes


@gopherbot gopherbot commented Nov 7, 2019

Change https://golang.org/cl/205938 mentions this issue: runtime: make the test addresses for pageAlloc smaller on 32-bit

gopherbot pushed a commit that referenced this issue Nov 7, 2019
This change flips the oldPageAllocator constant enabling the new page
allocator in the Go runtime.

Updates #35112.

Change-Id: I7fc8332af9fd0e43ce28dd5ebc1c1ce519ce6d0c
Reviewed-on: https://go-review.googlesource.com/c/go/+/201765
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
gopherbot pushed a commit that referenced this issue Nov 8, 2019
This change removes the old page allocator from the runtime.

Updates #35112.

Change-Id: Ib20e1c030f869b6318cd6f4288a9befdbae1b771
Reviewed-on: https://go-review.googlesource.com/c/go/+/195700
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
gopherbot pushed a commit that referenced this issue Nov 8, 2019
In preparation for a lockless fast path in the page allocator, this
change makes it so that checking if an allocation needs to be zeroed may
be done atomically.

Unfortunately, this means there is a CAS-loop to ensure monotonicity of
the zeroedBase value in heapArena. This CAS-loop exits if an allocator
acquiring memory further on in the arena wins or if it succeeds. The
CAS-loop should have a relatively small amount of contention because of
this monotonicity, though it would be ideal if we could just have
CAS-ers with the greatest value always win. The CAS-loop is unnecessary
in the steady-state, but should bring some start-up performance gains as
it's likely cheaper than the additional zeroing required, especially for
large allocations.

For very large allocations that span arenas, the CAS-loop should be completely uncontended for most of the arenas it touches; it may only encounter contention on the first and last arena.

Updates #35112.

Change-Id: If3d19198b33f1b1387b71e1ce5902d39a5c0f98e
Reviewed-on: https://go-review.googlesource.com/c/go/+/203859
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
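A minimal sketch of the monotonic CAS-loop described above (assumed names): zeroedBase only ever grows, and a racing allocator that has already pushed it further wins, so the loop exits either on success or when a larger value is observed.

```go
package zeroing

import "sync/atomic"

type heapArenaSketch struct {
	zeroedBase uintptr // accessed atomically; only ever grows
}

// advanceZeroedBase moves zeroedBase up to newBase. The loop exits either
// when our CAS succeeds or when a concurrent allocator further along in the
// arena has already pushed zeroedBase past newBase.
func (a *heapArenaSketch) advanceZeroedBase(newBase uintptr) {
	for {
		old := atomic.LoadUintptr(&a.zeroedBase)
		if old >= newBase {
			return // someone else already advanced it past us
		}
		if atomic.CompareAndSwapUintptr(&a.zeroedBase, old, newBase) {
			return // we advanced it
		}
		// Lost the race to a different value: reload and retry.
	}
}
```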
gopherbot pushed a commit that referenced this issue Nov 8, 2019
For the most part, heap memstats are already updated atomically when
passed down to OS-level memory functions (e.g. sysMap). Elsewhere,
however, they're updated with the heap lock.

In order to facilitate holding the heap lock for less time during
allocation paths, this change more consistently makes the update of
these statistics atomic by calling mSysStat{Inc,Dec} appropriately
instead of simply adding or subtracting. It also ensures these values
are loaded atomically.

Furthermore, an undocumented but safe update condition for these
memstats is during STW, at which point using atomics is unnecessary.
This change also documents this condition in mstats.go.

Updates #35112.

Change-Id: I87d0b6c27b98c88099acd2563ea23f8da1239b66
Reviewed-on: https://go-review.googlesource.com/c/go/+/196638
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
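A tiny sketch of the pattern (illustrative names, not the runtime's mSysStat helpers): stats that may be read while other threads update them are adjusted with atomic adds instead of plain arithmetic under the heap lock.

```go
package stats

import "sync/atomic"

// heapSysBytes stands in for one of the heap memstats that may now be read
// while other threads update it.
var heapSysBytes uint64

func sysStatInc(stat *uint64, n uintptr) {
	atomic.AddUint64(stat, uint64(n))
}

// sysStatDec subtracts by adding the two's complement, the standard way to
// decrement with atomic.AddUint64.
func sysStatDec(stat *uint64, n uintptr) {
	atomic.AddUint64(stat, ^uint64(n-1))
}
```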
gopherbot pushed a commit that referenced this issue Nov 8, 2019
This change defines a maximum supported physical and huge page size in
the runtime based on the new page allocator's implementation, and uses
them where appropriate.

Furthermore, if the system's huge page size exceeds the maximum supported size, we simply ignore it silently.

It also fixes a huge-page-related test which is only triggered by a
condition which is definitely wrong.

Finally, it adds a few TODOs related to code clean-up and supporting
larger huge page sizes.

Updates #35112.
Fixes #35431.

Change-Id: Ie4348afb6bf047cce2c1433576d1514720d8230f
Reviewed-on: https://go-review.googlesource.com/c/go/+/205937
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
gopherbot pushed a commit that referenced this issue Nov 8, 2019
mheap_.alloc currently accepts both a spanClass and a "large" parameter
indicating whether the allocation is large. These are redundant, since
spanClass.sizeclass() == 0 is an equivalent way to determine this and is
already used in mheap_.alloc. There are no places in the runtime where
the size class could be non-zero and large == true.

Updates #35112.

Change-Id: Ie66facf8f0faca6f4cd3d20a8ac4bc259e11823d
Reviewed-on: https://go-review.googlesource.com/c/go/+/196639
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
gopherbot pushed a commit that referenced this issue Nov 8, 2019
This change makes it so that allocation and free related page sweeper
metadata operations (e.g. pageInUse and pagesInUse) are atomic rather
than protected by the heap lock. This will help in reducing the length
of the critical path with the heap lock held in future changes.

Updates #35112.

Change-Id: Ie82bff024204dd17c4c671af63350a7a41add354
Reviewed-on: https://go-review.googlesource.com/c/go/+/196640
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
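A sketch of what marking a page in-use atomically can look like (simplified and with illustrative names; the runtime uses per-arena bitmaps and its own atomics package): a CAS loop sets a single bit without holding the heap lock.

```go
package sweep

import "sync/atomic"

// pageBits is a toy in-use bitmap: one bit per page, 32 pages per word,
// updated atomically rather than under the heap lock.
type pageBits []uint32

// setInUse atomically sets the bit for page i.
func (b pageBits) setInUse(i uint) {
	word := &b[i/32]
	mask := uint32(1) << (i % 32)
	for {
		old := atomic.LoadUint32(word)
		if old&mask != 0 || atomic.CompareAndSwapUint32(word, old, old|mask) {
			return
		}
		// CAS lost a race with an update to another bit in the same word; retry.
	}
}
```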
gopherbot pushed a commit that referenced this issue Nov 8, 2019
Currently gcSweepBuf guarantees that push operations may be performed
concurrently with each other and that block operations may be performed
concurrently with push operations as well.

Unfortunately, this isn't quite true. The existing code allows push
operations to happen concurrently with each other, but block operations
may return blocks with nil entries. The way this can happen is if two
concurrent pushers grab a slot to push to, and the first one (the one
with the earlier slot in the buffer) doesn't quite write a span value
when the block is called. The existing code in block only checks if the
very last value in the block is nil, when really an arbitrary number of
the last few values in the block may or may not be nil.

Today, this case can't actually happen because when push operations
happen concurrently during a GC (which is the only time block is
called), they only ever happen during an allocation with the heap lock
held, effectively serializing them. A block operation may happen
concurrently with one of these pushes, but its callers will never see a
nil mspan. Outside of a GC, this isn't a problem because although push
operations from allocations can run concurrently with push operations
from sweeping, block operations will never run.

In essence, the real concurrency guarantees provided by gcSweepBuf are
that block operations may happen concurrently with push operations, but
that push operations may not be concurrent with each other if there are
any block operations.

To fix this, and to prepare for push operations happening without the
heap lock held in a future CL, we update the documentation for block to
correctly state that there may be nil entries in the returned slice.
While we're here, make the mspan writes into the buffer atomic to avoid
a block user racing on a nil check, and document that the user should
load mspan values from the returned slice atomically. Finally, we make
all callers of block adhere to the new rules.

We choose to allow nil values rather than filter them out because the
only caller of block is markrootSpans, and if it catches a nil entry,
then there wasn't anything to mark in there anyway since the span is
just being created.

Updates #35112.

Change-Id: I6450aab15f51690d7a000ba5b3d529cf2ca5da1e
Reviewed-on: https://go-review.googlesource.com/c/go/+/203318
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
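A sketch of the caller-side rule described above (hypothetical types and names): entries in a returned block are loaded atomically, and nil slots, which belong to pushes that reserved a slot but haven't written the span pointer yet, are simply skipped.

```go
package sweepbuf

import (
	"sync/atomic"
	"unsafe"
)

type mspanSketch struct{} // stand-in for the runtime's mspan

// markrootBlock scans one block returned by the sweep buffer, loading each
// entry atomically and skipping nil slots left by in-flight pushes.
func markrootBlock(block []*mspanSketch, mark func(*mspanSketch)) {
	for i := range block {
		p := (*unsafe.Pointer)(unsafe.Pointer(&block[i]))
		s := (*mspanSketch)(atomic.LoadPointer(p))
		if s == nil {
			continue // nothing to mark: the span is still being created
		}
		mark(s)
	}
}
```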
gopherbot pushed a commit that referenced this issue Nov 8, 2019
This change combines the functionality of allocSpanLocked, allocManual,
and alloc_m into a new method called allocSpan. While these methods'
abstraction boundaries are OK when the heap lock is held throughout,
they start to break down when we want finer-grained locking in the page
allocator.

allocSpan does just that, and only locks the heap when it absolutely has
to. Piggy-backing off of work in previous CLs to make more of span
initialization lockless, this change makes span initialization entirely
lockless as part of the reorganization.

Ultimately this change will enable us to add a lockless fast path to
allocSpan.

Updates #35112.

Change-Id: I99875939d75fb4e958a67ac99e4a7cda44f06864
Reviewed-on: https://go-review.googlesource.com/c/go/+/196641
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
gopherbot pushed a commit that referenced this issue Nov 8, 2019
This change adds a per-p mspan object cache similar to the sudog cache.
Unfortunately this cache can't quite operate like the sudog cache, since
it is used in contexts where write barriers are disallowed (i.e.
allocation codepaths), so rather than managing an array and a slice,
it's just an array and a length. A little bit more unsafe, but avoids
any write barriers.

The purpose of this change is to reduce the number of operations which
require the heap lock in allocation, paving the way for a lockless fast
path.

Updates #35112.

Change-Id: I32cfdcd8528fb7be985640e4f3a13cb98ffb7865
Reviewed-on: https://go-review.googlesource.com/c/go/+/196642
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
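A sketch of the array-plus-length cache shape described above (simplified, not the runtime's p struct): a fixed array and a count instead of a slice, so no write barriers are needed on the allocation path.

```go
package pcache

type mspanSketch struct{} // stand-in for the runtime's mspan

// spanCache holds a small per-P stash of mspan objects.
type spanCache struct {
	buf [128]*mspanSketch
	n   int
}

// get pops a cached span, or returns nil if the caller must refill from the
// central pool (which requires the heap lock).
func (c *spanCache) get() *mspanSketch {
	if c.n == 0 {
		return nil
	}
	c.n--
	s := c.buf[c.n]
	c.buf[c.n] = nil
	return s
}

// put stashes a span for reuse, reporting false when the cache is full.
func (c *spanCache) put(s *mspanSketch) bool {
	if c.n == len(c.buf) {
		return false
	}
	c.buf[c.n] = s
	c.n++
	return true
}
```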
gopherbot pushed a commit that referenced this issue Nov 8, 2019
This change adds a page cache structure which owns a chunk of free pages
at a given base address. It also adds code to allocate to this cache
from the page allocator. Finally, it adds tests for both.

Notably this change does not yet integrate the code into the runtime,
just into runtime tests.

Updates #35112.

Change-Id: Ibe121498d5c3be40390fab58a3816295601670df
Reviewed-on: https://go-review.googlesource.com/c/go/+/196643
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
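A sketch of the page cache shape (close in spirit to, but not necessarily identical to, the runtime's pageCache; field and constant names are illustrative): a 64-page chunk identified by its base address, with bitmaps for which pages the cache owns and which are scavenged.

```go
package pcache

import "math/bits"

const pageSize = 8192 // Go's page size on most platforms

type pageCacheSketch struct {
	base  uintptr // base address of the 64-page chunk
	cache uint64  // 1 bits are free pages owned by this cache
	scav  uint64  // 1 bits are pages that are currently scavenged
}

// allocOne takes the lowest free page out of the cache, returning 0 if the
// cache is empty.
func (c *pageCacheSketch) allocOne() uintptr {
	if c.cache == 0 {
		return 0
	}
	i := uintptr(bits.TrailingZeros64(c.cache))
	c.cache &^= 1 << i // mark the page allocated
	c.scav &^= 1 << i  // it's about to be used, so it's no longer scavenged
	return c.base + i*pageSize
}
```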
gopherbot pushed a commit that referenced this issue Nov 8, 2019
This change adds a per-p free page cache which the page allocator may
allocate out of without a lock. The change also introduces a completely
lockless page allocator fast path.

Although the cache contains at most 64 pages (and usually less), the
vast majority (85%+) of page allocations are exactly 1 page in size.

Updates #35112.

Change-Id: I170bf0a9375873e7e3230845eb1df7e5cf741b78
Reviewed-on: https://go-review.googlesource.com/c/go/+/195701
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
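A sketch of the resulting fast-path/slow-path split (all names hypothetical): single-page requests, the vast majority, first try the current P's cache with no locking, and everything else, including cache refills, goes through the heap lock.

```go
package pcache

import "sync"

var heapLock sync.Mutex

// perPCache stands in for the bitmap-based cache attached to each P.
type perPCache struct{ free int }

func (c *perPCache) allocOne() uintptr {
	if c.free == 0 {
		return 0
	}
	c.free--
	return 0x1000 // placeholder address
}

// allocPages shows the control flow: try the per-P cache lock-free for the
// common one-page case, otherwise fall back to the heap lock.
func allocPages(p *perPCache, npages int, slow func(int) uintptr) uintptr {
	if npages == 1 {
		if addr := p.allocOne(); addr != 0 {
			return addr // lock-free fast path
		}
	}
	heapLock.Lock()
	defer heapLock.Unlock()
	return slow(npages) // refill the cache and/or allocate from the global bitmap
}
```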

@gopherbot gopherbot commented Nov 8, 2019

Change https://golang.org/cl/206199 mentions this issue: runtime: copy some functions from math/bits to runtime/internal/sys

gopherbot pushed a commit that referenced this issue Nov 8, 2019
CL 201765 activated calls from the runtime to functions in math/bits.
When coverage and race detection were simultaneously enabled,
this caused a crash when the covered+race-checked code in
math/bits was called from the runtime before there was even a P.

PS Win for gdlv in helping sort this out.

TODO - next CL intrinsifies the new functions in
runtime/internal/sys

TODO/Would-be-nice - Ctz64 and TrailingZeros64 are the same
function; 386.s is intrinsified; clean all that up.

Fixes #35461.
Updates #35112.

Change-Id: I750a54dba493130ad3e68a06530ede7687d41e1d
Reviewed-on: https://go-review.googlesource.com/c/go/+/206199
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>

@gopherbot gopherbot commented Nov 9, 2019

Change https://golang.org/cl/206277 mentions this issue: runtime: fix min/max logic in findScavengeCandidate


@gopherbot gopherbot commented Nov 9, 2019

Change https://golang.org/cl/206200 mentions this issue: cmd/compile: intrinsify functions added to runtime/internal/sys

gopherbot pushed a commit that referenced this issue Nov 9, 2019
This restores intrinsic status to functions copied from math/bits
into runtime/internal/sys, as an aid to runtime performance.

Updates #35112.

Change-Id: I41a7d87cf00f1e64d82aa95c5b1000bc128de820
Reviewed-on: https://go-review.googlesource.com/c/go/+/206200
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
gopherbot pushed a commit that referenced this issue Nov 10, 2019
This change makes the test addresses start at 1 GiB instead of 2 GiB to
support mips and mipsle, which only have 31-bit address spaces.

It also changes some tests to use smaller offsets for the chunk index to
avoid jumping too far ahead in the address space to support 31-bit
address spaces. The tests don't require such large jumps for what
they're testing anyway.

Updates #35112.
Fixes #35440.

Change-Id: Ic68ff2b0a1f10ef37ac00d4bb5b910ddcdc76f2e
Reviewed-on: https://go-review.googlesource.com/c/go/+/205938
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
gopherbot pushed a commit that referenced this issue Nov 11, 2019
Before this CL, if max > min and max was unaligned to min, then the
function could return an unaligned (unaligned to min) region to
scavenge. On most platforms, this leads to some kind of crash.

Fix this by explicitly aligning max to the next multiple of min.

Fixes #35445.
Updates #35112.

Change-Id: I0af42d4a307b48a97e47ed152c619d77b0298291
Reviewed-on: https://go-review.googlesource.com/c/go/+/206277
Reviewed-by: Ian Lance Taylor <iant@golang.org>
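A simplified sketch of the alignment fix the commit message describes (the function name is illustrative): max is rounded up to a multiple of min before searching, so the candidate region handed back is always min-aligned.

```go
package scavenge

// alignScavengeMax rounds max up to the next multiple of min so that a
// returned scavenge candidate is always min-aligned.
func alignScavengeMax(min, max uintptr) uintptr {
	if max%min != 0 {
		max = (max/min + 1) * min
	}
	return max
}
```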

@gopherbot gopherbot commented Nov 13, 2019

Change https://golang.org/cl/206978 mentions this issue: runtime: check summary before scavenging in fast path
