runtime: start concurrent GC promptly when we reach its trigger
Currently, when allocation reaches the concurrent GC trigger size, we
start the concurrent collector by ready'ing its G. This simply puts it
on the end of the P's run queue, which means we may not actually start
GC for some time as the current G continues to run and then the P
drains other Gs already on its run queue. Since the mutator can
continue to allocate, the heap can potentially be much larger than we
intended by the time GC actually starts. Furthermore, how much larger
is difficult to predict since it depends on the scheduler.
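
To make the scheduling delay concrete, here is a small user-level sketch (not part of the commit; it only mimics the effect from outside the runtime, using nothing but the standard library). A goroutine made runnable waits on the P's run queue while the current goroutine keeps allocating, which is exactly the delay described above for the GC G; on a runtime of this era, without asynchronous preemption, it may not run until the allocator blocks:

package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	runtime.GOMAXPROCS(1) // one P, so run-queue ordering is easy to observe

	readied := time.Now()
	ran := make(chan time.Time, 1)
	go func() { ran <- time.Now() }() // stands in for ready'ing the GC G

	// The current goroutine keeps allocating, so the readied goroutine
	// sits on the run queue while the heap keeps growing.
	var sink [][]byte
	for i := 0; i < 4096; i++ {
		sink = append(sink, make([]byte, 1<<16))
	}
	_ = sink

	fmt.Println("delay from ready to run:", (<-ran).Sub(readied))
}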

Fix this by preempting the current G and switching directly to the
concurrent GC G as soon as we reach the trigger heap size.

On the garbage benchmark from the benchmarks subrepo with
GOMAXPROCS=4, this reduces the time from triggering the GC to the
beginning of sweep termination by 10 to 30 milliseconds, which reduces
allocation after the trigger by up to 10MB (a large fraction of the
64MB live heap the benchmark tries to maintain).

One other known source of delay before we "really" start GC is the
sweep finalization performed before sweep termination. This has
similar negative effects on heap size and predictability, but is an
orthogonal problem. This change adds a TODO for this.
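
The TODO added in mgc.go below points at the eventual fix: pace sweeping proportionally to allocation. As a hedged sketch of that ratio (hypothetical code; neither the function nor its parameter names exist in the runtime at this commit), one would sweep enough pages per byte allocated that the last page is swept just as the heap reaches next_gc:

// Hypothetical sketch of the "allocation sweep ratio" the TODO asks for;
// the identifiers are invented for illustration.
func sweepPagesPerByte(pagesRemaining, heapLive, nextGC uint64) float64 {
	if heapLive >= nextGC {
		// Already at (or past) the trigger: sweep everything now.
		return float64(pagesRemaining)
	}
	// Spread the remaining sweep work over the bytes the mutator may
	// still allocate, so sweeping finishes exactly at next_gc.
	return float64(pagesRemaining) / float64(nextGC-heapLive)
}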

Change-Id: I8bae98cb43685c1bf353ff55868e4647e3743c47
Reviewed-on: https://go-review.googlesource.com/8513
Reviewed-by: Rick Hudson <rlh@golang.org>
aclements committed Apr 10, 2015
1 parent 6afb5fa commit 4b956ae
Showing 2 changed files with 51 additions and 1 deletion.
21 changes: 20 additions & 1 deletion src/runtime/mgc.go
@@ -261,10 +261,22 @@ func startGC(mode int) {
 	if !bggc.started {
 		bggc.working = 1
 		bggc.started = true
+		// This puts the G on the end of the current run
+		// queue, so it may take a while to actually start.
+		// This is only a problem for the first GC cycle.
 		go backgroundgc()
 	} else if bggc.working == 0 {
 		bggc.working = 1
-		ready(bggc.g, 0)
+		if getg().m.lockedg != nil {
+			// We can't directly switch to GC on a locked
+			// M, so put it on the run queue and someone
+			// will get to it.
+			ready(bggc.g, 0)
+		} else {
+			unlock(&bggc.lock)
+			readyExecute(bggc.g, 0)
+			return
+		}
 	}
 	unlock(&bggc.lock)
 }
@@ -299,6 +311,13 @@ func gc(mode int) {
 	semacquire(&worldsema, false)
 
 	// Pick up the remaining unswept/not being swept spans concurrently
+	//
+	// TODO(austin): If the last GC cycle shrank the heap, our 1:1
+	// sweeping rule will undershoot and we'll wind up doing
+	// sweeping here, which will allow the mutator to do more
+	// allocation than we intended before we "really" start GC.
+	// Compute an allocation sweep ratio so we're done sweeping by
+	// the time we hit next_gc.
 	for gosweepone() != ^uintptr(0) {
 		sweep.nbgsweep++
 	}
31 changes: 31 additions & 0 deletions src/runtime/proc1.go
@@ -155,6 +155,37 @@ func ready(gp *g, traceskip int) {
 	}
 }
 
+// readyExecute marks gp ready to run, preempts the current g, and executes gp.
+// This is used to start concurrent GC promptly when we reach its trigger.
+func readyExecute(gp *g, traceskip int) {
+	mcall(func(_g_ *g) {
+		if trace.enabled {
+			traceGoUnpark(gp, traceskip)
+			traceGoSched()
+		}
+
+		if _g_.m.locks != 0 {
+			throw("readyExecute: holding locks")
+		}
+		if _g_.m.lockedg != nil {
+			throw("cannot readyExecute from a locked g")
+		}
+		if readgstatus(gp)&^_Gscan != _Gwaiting {
+			dumpgstatus(gp)
+			throw("bad gp.status in readyExecute")
+		}
+
+		// Preempt the current g
+		casgstatus(_g_, _Grunning, _Grunnable)
+		runqput(_g_.m.p, _g_)
+		dropg()
+
+		// Ready gp and switch to it
+		casgstatus(gp, _Gwaiting, _Grunnable)
+		execute(gp)
+	})
+}
+
 func gcprocs() int32 {
 	// Figure out how many CPUs to use during GC.
 	// Limited by gomaxprocs, number of actual CPUs, and MaxGcproc.