
proposal: runtime: garbage collect goroutines blocked forever #19702

Closed
faiface opened this issue Mar 24, 2017 · 71 comments

@faiface commented Mar 24, 2017

Hi everyone!

As of today, Go's runtime does not garbage collect blocked goroutines. The most cited reason is that goroutines blocked forever usually indicate a bug, and that collecting them would hide this bug. I would like to show a few examples where garbage-collected goroutines would really be appreciated and would lead to much safer and less buggy code.

What does it mean?

package main

import "fmt"

func numbers() <-chan int {
	ch := make(chan int)
	go func() {
		for i := 0; ; i++ {
			ch <- i
		}
	}()
	return ch
}

func main() {
	for i := 0; i < 1000; i++ {
		for num := range numbers() {
			if num >= 1000 {
				break
			}
			fmt.Println(num)
		}
	}
}

This code has a memory leak in current Go. The function numbers returns a channel that generates an infinite sequence of natural numbers. We get this sequence 1000 times in the main function, print the first 1000 numbers of each sequence, and quit. The numbers function spawns a goroutine that feeds the channel. However, once we've received the first 1000 numbers, the goroutine stays in memory, blocked forever on its send. If the outer for-loop iterated forever, we would run out of memory very quickly.

How to collect goroutines?

I'd suggest the following algorithm:

  1. All non-blocked goroutines are marked active.
  2. All channels not reachable by active goroutines are marked inactive.
  3. All goroutines blocked on an inactive channel are marked dead and garbage collected.

Edit: Based on the discussion, I add one detail to the implementation. A goroutine marked dead would be collected in a way that's identical to calling runtime.Goexit inside that goroutine (https://golang.org/pkg/runtime/#Goexit).

Edit 2: Based on the further discussion, runtime.Goexit behavior is debatable and maybe not right.

What are the benefits?

Once such goroutine garbage collection is enabled, we can solve a large variety of new problems.

  1. Infinite generator functions. A generator is a function that returns a channel that sends a stream of values. Heavily used in functional languages, generators are currently hardly possible in Go (we'd have to send them a stopping signal).
  2. Finite generator functions. We can write them in Go; however, we have to drain the channel if we want to avoid a memory leak.
  3. Manager goroutines. A nice way to construct a concurrency-safe object in Go is, instead of guarding it with a mutex, to create a manager goroutine that receives commands for the object over a channel and executes them as they come in. This is hard to do in Go today, because if the object goes out of scope, the manager goroutine stays in memory forever.

All of the described problems are solvable. But so is manual memory management, and this is just like that: inconvenient, error-prone, and preventing us from doing some advanced things that would be really useful.

I'd like to point out that the whole talk "Advanced Go Concurrency Patterns" would be pointless if Go collected blocked goroutines.

What do you guys think?

@gopherbot gopherbot added this to the Proposal milestone Mar 24, 2017

@gopherbot gopherbot added the Proposal label Mar 24, 2017

@bradfitz (Member) commented Mar 24, 2017

This makes the garbage collector more expensive.

Also, how do you "mark dead" a goroutine? A goroutine can't be killed by another goroutine. Do you run deferred statements? Does the channel operation panic?

This would require a lot of design & implementation work for marginal benefit.

@faiface (Author) commented Mar 24, 2017

I'm not sure about the details, but it seems logical that the deferred statements would run and then the goroutine would be silently killed. It would not be killed by another goroutine, it would be killed by the garbage collector.

I personally don't think the benefits are marginal. It would make a lot of the obvious code work correctly.

If you watch Rob Pike's "Go Concurrency Patterns" talk, you'll see that his code examples would not be correct if goroutines were not garbage collected like this. Also, at one point in the talk, when an audience member asks "what happens to that goroutine?", he responds with something like, "Don't worry, it'll be collected."

@bradfitz (Member) commented Mar 24, 2017

> I'm not sure about the details,

The proposal process is really about the details, though.

> but it seems logical that the deferred statements would run and then the goroutine would be silently killed.

What about locks it's currently holding? Would deferred statements be run?

What about the cases where it actually is a bug to have the leak and the goroutine shouldn't be silently swept under the rug?

Maybe an alternate proposal would be that the sender goroutine should have to explicitly select on the receiver going away:

func numbers() <-chan int {
	ch := make(chan int)
	go func() {
		for i := 0; ; i++ {
			select {
			case ch <- i:
			unreachable: // language change, though.
				// choose: panic, return silently, etc.
			}
		}
	}()
	return ch
}

But really, now that we have context in the standard library, the answer should be to start the generator goroutine with a context.

@faiface (Author) commented Mar 24, 2017

I think that deferred calls should be run. If one of the deferred calls is a mutex unlock, then it'll be run. If the blocked goroutine holds a mutex but no deferred call unlocks it, then that's a bug, and although the goroutine will be collected, the mutex won't be unlocked, which would expose the bug just as well as a goroutine memory leak does.

On the Context: of course, it's possible to use one here, but you have to cancel it manually. It's just like manual memory management.

@bradfitz (Member) commented Mar 24, 2017

Okay, so your proposal is that a channel send or receive operation that can be dynamically proven to never complete turns into a call to runtime.Goexit (https://golang.org/pkg/runtime/#Goexit)?

@faiface (Author) commented Mar 24, 2017

Exactly, yes.

@randall77 (Contributor) commented Mar 24, 2017

I don't agree with calling Goexit. If a goroutine is blocked forever, we can just garbage collect it without changing any semantics (and maybe sample it or something, for the bug case). It would never have run again, so there isn't an observable difference except for memory footprint. Why would it call Goexit, run defers, etc.? That seems like an unexpected behavior while waiting on a channel. And it feels like finalizers - we can only provide nebulous guarantees about whether we can detect it and how quickly we do so.

Holding a lock while doing a channel operation should be heavily frowned upon :(

@faiface (Author) commented Mar 24, 2017

@randall77 You've got a valid point. This would need to be discussed.

@fraenkel (Contributor) commented Mar 24, 2017

How would goroutines waiting on Tickers or Timers work?
What about channels in channels?

@faiface (Author) commented Mar 24, 2017

@fraenkel Tickers and Timers theoretically always have a running goroutine behind them, counting the time, which eventually sends a signal. So, they're always active. Blocking on them doesn't garbage collect anything.

Channels in channels don't make any difference, if I'm not missing anything. If a channel is not reachable by any active goroutine, then no values will ever be sent or received on it.

@nullchinchilla commented Mar 24, 2017

If this is implemented (and I think it indeed would be useful), deferred functions should not run. Essentially this should not change the observable behavior of any program except for memory usage.

I do feel like this feature would be very helpful, as it allows goroutines essentially to be used as continuations (from languages like Scheme) that contain some state that could be resumed or thrown away implicitly. A goroutine in the background managing some variables local to that goroutine could replace structs in cases where you really do not want to expose any internal structure, or you are doing some complex synchronization.

@nullchinchilla commented Mar 24, 2017

In fact, it might be a good idea to panic if the garbage collector wants to collect an unreachable goroutine but finds that it has pending defers (or more conservatively, is holding a lock), since that's always a bug.

@randall77 (Contributor) commented Mar 24, 2017

I think this could work, but it would require folding goroutines into the marking phase of the GC.
Instead of treating all goroutines as roots, start with only active goroutines. (Active here means not blocked on a channel. Goroutines in syscalls are considered active for this.) Any time we mark a channel, mark all the goroutines waiting on that channel as active, and schedule their stacks for scanning. At the end of mark, any goroutines not marked active can be collected. As a bonus, we never need to scan their stack (and we can't, because then we would find a reference to the channel they are waiting on).

There are lots of tricky details:

  • Channels have direction - if an active goroutine has a send-only reference to a channel, it could never wake up a goroutine receive-waiting on that channel. The GC has no way to tell different channel references apart (send/receive/bidirectional is only a compile-time thing).
  • The GC needs to know that channels (or possibly something the channel references, like the goroutine struct) are special objects. It needs that information to know that when marking a channel, it must do extra work that it doesn't do for normal objects. We'd need to allocate channels on segregated pages or something, or maybe mark them similarly to how we handle finalizers.
  • Goroutines could be selecting on several channels at once. Any one of those could cause it to be active.

On a more meta level, this proposal would encourage using forever-blocking goroutines during the normal course of business. Right now they are considered a bug. The proposal suggests this is just a defense against memory leaks, but this is a slippery slope people are surely going to drive a Mack truck down. Lazy-evaluating Haskell interpreter, anyone? Go isn't really a good fit for that, as a goroutine is a really heavy future implementation. But people will try anyway, and get frustrated with the performance.

@fraenkel (Contributor) commented Mar 24, 2017

If one happens to write incorrect code which creates deadlocks or goroutines which are "dead" for some reason, one cannot easily determine what went wrong since the evidence is now magically removed.

@faiface (Author) commented Mar 25, 2017

@fraenkel It seems to me that deadlocks could easily be handled as a special case.

@nullchinchilla commented Mar 25, 2017

@randall77 I already use goroutines heavily as an annoying future implementation that requires manual memory management. The whole point of goroutines being extremely lightweight and scalable is that doing this is not "really heavy". If goroutines are just used where you would use threads in Java, etc, then it has lost some of its utility.

@nullchinchilla commented Mar 25, 2017

For example, I often use goroutines to express a sequence of numbers, such as an allocator for object IDs, to avoid intertwining the logic for incrementing counters, etc with the main loop where the business logic happens.

Far more frequently, I use goroutines to manage an object that requires complex synchronization. Essentially, in the function that creates the object ("NewWhateverStruct(...)" etc), a goroutine will be spun up in the background that communicates with the methods through channels and does all the actual work. This can include objects that do not manage external resources; a large in-memory thread-safe database for example. Currently, users of such an object must call a "Close()" method or something to kill the goroutines running in the background, which is annoying and easy to mess up, especially when the object may be referenced many times throughout many goroutines.

@bradfitz (Member) commented Mar 25, 2017

@randall77, you could imagine doing exactly what you propose in the GC, but instead of doing anything else to kill the goroutine, just ~sync.Once a warning about a leaked goroutine to stderr. The runtime doesn't complain to stderr often, but this seems justified enough.

@docmerlin commented Mar 25, 2017

This would also allow some of the more common erlang patterns to be useful in go.

@egonelbre (Contributor) commented Mar 25, 2017

It's possible to detect when a goroutine is blocked for more than some time by examining the stack (proof-of-concept https://github.com/egonelbre/antifreeze/blob/master/monitor.go). As such, one of the approaches could be that you can specify a time limit for how long a goroutine can be blocked.

With regards to the example, this is faster, shorter and doesn't require an additional goroutine:

package main

import (
	"fmt"
	"sync/atomic"
)

type Numbers struct{ next int64 }

// Next returns successive integers, starting at 1.
func (n *Numbers) Next() int { return int(atomic.AddInt64(&n.next, 1)) }

func main() {
	for i := 0; i < 10; i++ {
		var numbers Numbers
		for {
			num := numbers.Next()
			if num >= 1000 {
				break
			}
			fmt.Println(num)
		}
	}
}
@nullchinchilla commented Mar 25, 2017

@egonelbre Using a timeout would be horrible, as it will either be too long or too short for some purposes. Especially accidentally timing out a goroutine which is still reachable would cause weird unpredictable bugs.

Integrating goroutine collecting with the GC would make sure that all the collecting is completely "unobservable" without dumping stack traces. And as @docmerlin mentions, it significantly increases the expressivity of Go, bringing it closer to languages like Erlang with a somewhat similar model of communicating lightweight processes.

@Merovius commented Mar 25, 2017

I was missing this for a long time too, and I still don't quite understand why memory is considered a resource that a programmer should obviously not need to care about, while goroutines aren't (cf. "What about the cases where it actually is a bug to have the leak and the goroutine shouldn't be silently swept under the rug?" - I could just as easily ask "what about cases where a leaked pointer is actually a bug and the GC is sweeping that under the rug?").

That being said, I believe (a) this proposal is so far too hand-wavy. I think there are a lot of questions to be answered about the details of which goroutines can and can't be collected. And as this feature is only useful once it's codified in the spec (otherwise a program can't rely on the collection, so any actual use of it would still be incorrect), these questions need good, complete, but also concise answers.
And (b) that indeed, context seems like a good enough solution for at least all the cases I care about. In 99% of code, I need to plumb through a context anyhow and rely on it being cancelled upstack.

So it would be cool if this could happen, but the edge cases should definitely be thought about hard.

@faiface (Author) commented Mar 25, 2017

@Merovius Fully agree. This proposal is not meant to be complete; I rather wanted to see whether other people were missing this too.

I think it would be useful if some more people from the Go team stated their opinion on this (such as @griesemer, @robpike, @adg, etc. not mentioning @bradfitz, who already contributed).

If the proposal turns out to be reasonable and acceptable, then we might start thinking about the details in depth.

@bradfitz (Member) commented Mar 25, 2017

The @golang/proposal-review group meets on Mondays, but generally doesn't comment on proposals until they've been open with community input for a week.

@faiface (Author) commented Mar 25, 2017

@bradfitz Ah, thanks, I didn't know that. Does that mean this proposal will be discussed two Mondays from now?

@bradfitz (Member) commented Mar 25, 2017

It depends on backlog.

@nullchinchilla commented Mar 25, 2017

@Merovius context is similar, in my mind, to manually managing memory with free, especially in code that does not use the context package pervasively, or for which a workflow involving context makes no sense (say, an in-memory data structure package).

@nullchinchilla commented Mar 25, 2017

@Merovius It seems like there's a very simple way of defining "unreachable":

  • A goroutine is unreachable if it is waiting on a resource that is unreachable (in the GC sense) from any other goroutine stack.
@faiface (Author) commented Mar 26, 2017

@mattn I didn't quite get your point here... If the channel can be used later, then it's not unreachable/inactive, so the goroutine sending values on it wouldn't be collected. A channel is unreachable/inactive only when it's not reachable by following references/pointers from an active goroutine.

@mattn (Member) commented Mar 26, 2017

@faiface

> If the channel can be used later, then it's not unreachable/inactive

I just mean it's harder to detect whether the chan will become unreachable/inactive or not (without a finalizer).

@nullchinchilla commented Mar 26, 2017

Indeed, if goroutines holding references to objects with finalizers will not be collected even when blocked indefinitely, that would be very surprising behavior for users, since finalizers might even be set by code in other packages. Goroutine collection would not be a good idea if programmers have to guess whether or not the code they wrote leaks goroutines.

Finalizers are also often used to call free on C-managed memory, in which case running them does make sense when collecting goroutines. I propose that we simply say that the existence of finalizers does not matter, and that when goroutines are collected, finalizers may run even if they wouldn't have run otherwise. This slightly breaks compatibility, but literally every change to the GC slightly breaks compatibility w.r.t. when finalizers run. The introduction of runtime.KeepAlive already broke compatibility to a greater degree, IMO.

@Merovius commented Mar 26, 2017

@bunsim As the current guarantee re finalizers is basically "they might not run, even if an object is unreachable", I disagree with your characterization. The guarantee allows running fewer or more finalizers of objects that are unreachable. It does not, however, allow running finalizers for objects that are (logically) reachable.

Anyway, I also don't think it's a particularly important breakage. Finalizers are broken anyway; and it seems we were fine adding just such a breakage recently (and introducing runtime.KeepAlive as a poor patch for that).

@nullchinchilla commented Mar 26, 2017

Exactly. runtime.KeepAlive already breaks the connection between "logical" reachability and finalizers running, so adding goroutine collection will almost certainly not break anybody's correctly written program.

@go101 commented Mar 27, 2017

@Merovius

> "they might not run, even if an object is unreachable"

Here it is somewhat different. If the blocked goroutine never exits, the finalizers will definitely not run. But if it is made to exit, the finalizers may run, which is never expected.

@go101 commented Mar 27, 2017

Maybe this rare case is not worth keeping compatibility for.

I have another simple example. Should the two newly created goroutines be collected or not?

package main

func f(c1, c2 chan int) {
	<-c1
	c2 <- 1
}

func main() {
	c1, c2 := make(chan int), make(chan int)
	go f(c1, c2)
	go f(c2, c1)
	
	// ...
}
@go101 commented Mar 27, 2017

Looks like the answer is yes.

Then how about this one? Will the two newly created goroutines not be collected just because there is a variable holding the channels, or at least not until g has been called?

package main

import "runtime"

func f(c chan int) {
	<- c
}

func g(c []chan int) {
	c[0] <- 1
}

func main() {
	c1, c2 := make(chan int), make(chan int)
	go f(c1)
	go f(c2)
	
	// ...
	
	cs := []chan int{make(chan int), c1, c2}
	go g(cs)
	
	// ...
	
	// ...
	runtime.KeepAlive(&cs)
}

@rsc rsc changed the title proposal: garbage collect goroutines blocked forever proposal: runtime: garbage collect goroutines blocked forever Mar 27, 2017

@RalphCorderoy commented Mar 28, 2017

I see many CLs on the stdlib to reduce the amount of garbage generated, continual effort on the GC to improve its performance and characteristics, and work on the compiler to improve its escape analysis. To collect blocked-forever goroutines would be encouraging more garbage to be created as the balance of when to use a goroutine shifts. Does it give any benefit that can't be obtained with a little bit of extra coding, e.g. select on a closing channel, or Context?

@nullchinchilla commented Mar 28, 2017

In situations where you are programming in an actor-style with large amounts of communicating goroutines, each acting as an object, then using closing channels or Context becomes very cumbersome, similar to manually managing memory. Sometimes it is nice to be able to program in a style centered entirely around goroutines communicating with channels.

I do agree that in most existing Go code there is no need, but I think that's precisely because it is difficult to write in such a style currently, so it is avoided in favor of things like explicit mutexes or abusing Context in cases where "request scope" makes no sense. Go is supposed to be about programming in a CSP style, not in a C-like "call free/unlock on everything" style (which admittedly Go makes a bit easier due to defer), and collecting garbage goroutines will make it a lot easier.

The improvement to the GC, IMO, should instead be a reason to worry less about garbage in Go. Back when Go had a fully stop-the-world GC, taking pains to avoid garbage in your code made sense, but now it doesn't. I've already started to remove all of my usages of tricks like object pooling (which often circumvents memory safety) unless profiling shows a significant performance decrease. Go has a very state-of-the-art low-latency GC, and programming idioms should reflect it.

@rsc (Contributor) commented Mar 28, 2017

One common question among new Go programmers is why there isn't a way to force kill a goroutine, like Unix kill -9, but for a single goroutine. The answer is that it would be too hard to program in that world, in fear that your goroutine might be killed at any moment. Each goroutine shares state with other goroutines.

What happens to the locks the goroutine holds? The invariants they protect may have been temporarily violated (that's what locks are for) and not yet reestablished. Releasing the lock will break the invariants permanently; not releasing the lock will deadlock the program.

What happens to wait groups waiting for that goroutine? If there's a deferred wg.Done, should it run? If the waitgroup just wants to know that the goroutine is no longer running, maybe that's OK. But if the waitgroup has a semantic meaning like "all the work I kicked off is done", then it's probably not OK to report that the goroutine is done when in fact it's not really done, just killed.

What happens to the other goroutines that goroutine is expected to communicate with? Maybe the goroutine was about to send a result on a channel. The killer can possibly send the result on the killed goroutine's behalf, but how can the killer know whether the goroutine completed its own send or not before being killed?

For all these reasons and more, Go requires that if you want a goroutine to stop executing, you communicate that fact to the goroutine and let it stop gracefully.

Note that even in Unix, where processes don't share memory, you still end up with problems like in a pipeline p1 | p2 when p2 is killed and p1 keeps trying to write to it. At least in that case the write system call has a way to report errors (in contrast to operations like channel sends or mutex acquisitions), but all the complexity around SIGPIPE exists because too many programs still didn't handle errors correctly in that limited context.

The proposal in this issue amounts to "have the GC kill permanently blocked goroutines". But that raises all the same questions, and it's a mistake for all the same reasons.

More fundamentally, the GC's job is to provide the illusion of infinite memory by reclaiming and reusing memory in ways that do not break that illusion, so that the program behaves exactly as if it had infinite memory and never reused a byte. For all the reasons above, killing a goroutine would very likely break that illusion. If defers are run, or locks are released, now the program behaves differently. If the stack is reclaimed and that happens to cause finalizers to run, now the program behaves differently.

Perhaps worst of all, collecting blocked goroutines would mean that when your whole program deadlocks, there are no goroutines left! So instead of a very helpful snapshot of how all the goroutines got stuck, you get a print reporting a deadlock and no additional information. Deadlocks are today the best possible concurrency bug: when they happen, the program stops and sits there waiting for you to inspect it. Contrast that with race conditions, where the eventual crash happens maybe billions of instructions later and you have to find some way to reconstruct what might have gone wrong. If the GC discards information about deadlocked goroutines, even for partial deadlocks, this fantastic property of deadlocks - that they are easy to debug because everything is sitting right there waiting for you - goes out the window.

Debuggability is the same reason we don't do tail call optimization: when something goes wrong we want to have the whole stack that identifies how we got to the code in question. The useful information discarded by tail call optimization is nothing compared to the useful information discarded by GC reclaiming blocked goroutines.

On top of all these problems, it's actually very difficult in most cases to reliably identify goroutines that are blocked forever. So this optimization would very likely not fire often, making it a rare event. The last thing you want is for this super-subtle behavior that can make your program behave in mysterious ways only happen rarely.

I just don't see collecting permanently blocked goroutines happening in any form. There is a vanishingly small intersection between the set of situations where you even identify the opportunity reliably and the set of situations where discarding the goroutines is safe and doesn't harm debugging.

The GC could address both of these problems - correctness and debuggability - by reclaiming goroutines but being careful to keep around any state required so that it looks like they're still there: don't run defers, record all stack pointers to keep those objects and any finalizers reachable (now or in the future) from those objects live, record a text stack trace to show in the eventual program crash, and so on. But this is really just compression, not collection, since some information must still be preserved. Effort spent compressing leaked memory is probably wasted: better to make it easier for programmers to find and fix leaks instead.

@rsc rsc closed this Mar 28, 2017

@bradfitz (Member) commented Mar 28, 2017

@rsc,

> Effort spent compressing leaked memory is probably wasted: better to make it easier for programmers to find and fix leaks instead.

Thoughts on my comment to Keith above?
#19702 (comment)

@rsc (Contributor) commented Mar 29, 2017

@bradfitz, I don't think this is worthwhile. The detection rate is so low that it won't be worth the complexity. Leaked goroutines don't even matter until there are a lot of them; people who care can watch runtime.NumGoroutine() against a limit they choose, and that will be much better at detecting leaks. Or just take a look at /debug/pprof once in a while.

@nullchinchilla commented Mar 29, 2017

@rsc I don't think this should be about "killing" goroutines. Such a change would never alter the observable behavior of any program except for memory usage. It is not about killing "infinitely blocked" goroutines or deadlock detection, but rather about collecting "unreachable" goroutines. A goroutine that's infinitely blocked but has a pointer on its stack to a waitgroup that somebody else is waiting on is obviously not unreachable.

Nothing will change from the perspective of programmers using existing Go idioms, except perhaps debugging, which can easily be addressed by adding a flag to GODEBUG's GC options that prints something when goroutines are collected. The only things goroutine collecting would add are new idioms / design patterns. And some other languages do indeed collect threads in exactly this unobservable way to enable more design freedom.

Of course, it's fine if the larger Go community feels that idioms such as using background goroutines to synchronize an object don't belong in Go, but my point was that this proposal isn't about magically fixing memory leaks in existing programs, but rather to enable programs that haven't been written yet to be written.

@rsc (Contributor) commented Mar 29, 2017

@bunsim I understand your motivation was for new patterns. But I don't believe you can split the hairs well enough here to collect only the goroutines that aren't stuck due to bugs. And you shouldn't have to set a GODEBUG flag (that breaks the idioms you want to enable!) after the fact to get useful information about your deadlocked program.

@faiface (Author) commented Mar 29, 2017

@rsc When you or anybody else have dealt with goroutines stuck forever, how often was it a bug that wouldn't be solved by garbage collecting that goroutine? If it's often the case, could you give any example?

@nullchinchilla

commented Mar 29, 2017

The GODEBUG flag wouldn't break the idioms; it would just print the number of goroutines collected each cycle, as it already does for bytes etc. And you can already disable the GC entirely using debugging options anyway.

Nobody is suggesting some weird heuristic to collect only goroutines that aren't stuck due to bugs. It's perfectly okay to collect ones that are stuck due to bugs, it only loses us some debugging info we can recover by simply disabling the GC. And a large amount of existing "buggy code" will simply be the correct way of doing things in the future.

The whole argument sounds suspiciously like "we shouldn't use a GC because it hides bugs from Valgrind".

@ianlancetaylor

Contributor

commented Mar 29, 2017

I want to stress @rsc 's point that if this is to change the way that people write Go code, then it is essential that it be clear when goroutines will be collected, and that goroutines will be reliably collected in that state. That seems to me to be difficult. It's a lot harder to tell when a goroutine is blocked than it is to tell when memory is not referenced. The canonical case here is a goroutine blocked reading from or writing to a channel, but the goroutine itself is holding a reference to the channel; for example, it may be an argument to the function that blocked, and therefore be on the goroutine's stack. We need to reliably detect not that the channel is unreferenced, but that the only references to the channel are from memory that is only visible to the blocked goroutine. That seems hard.

@nullchinchilla

commented Mar 29, 2017

@ianlancetaylor That doesn't seem especially harder than GC in general, though? It seems about as difficult as implementing a weak-map (delete from the map if the only references are from memory rooted at the map), though admittedly it isn't something Go has.

@ianlancetaylor

Contributor

commented Mar 29, 2017

Weak pointers are easier: you have a pointer type that the GC explicitly ignores (except that it gets updated when an object gets moved, for systems with a moving GC, and it gets cleared when an object is freed).

What we need for this is something different: given a goroutine G1 blocked on a channel, we need to run a GC from all roots except G1, and see whether we marked the channel. That is straightforward but too inefficient to actually use. I don't know an efficient way to implement this. Perhaps there is one.

@nullchinchilla

commented Mar 30, 2017

@ianlancetaylor What if we run the GC only from the roots of running/runnable goroutines, and then collect all goroutines blocked on channels we didn't mark? (Of course, with some caveats around timers, etc.) This should collect everything in one pass, though it might be hard to integrate into Go's concurrent GC.

@rsc

Contributor

commented Mar 30, 2017

There are many times when a goroutine will be blocked on a channel in a data structure that is accessible to other goroutines if they follow the right chain of pointers in memory, but none of them will. That case and many other related ones fundamentally fool any GC-based algorithm, so that in many cases goroutines will not be collected, and the conditions will be very unclear, depending potentially on optimizations in the compiler that move data between stack and heap, and maybe other optimizations as well. The result will be that it's fairly unpredictable when a blocked goroutine would be collected vs not.

As Ian said, "it is essential that it be clear when goroutines will be collected, and that goroutines will be reliably collected in that state."

@nullchinchilla

commented Mar 31, 2017

@rsc There are many times when a data structure is inside a data structure that's accessible from a goroutine if it follows the right chain of pointers in memory, but it never will. So the conditions under which memory will be collected are not clear and the GC is "fooled", making garbage collection fundamentally undecidable. I'm sure you could even construct a scenario where freeing an object at the "right" moment means solving the halting problem. Better use free()!

I've programmed extensively in Racket, a language where unreachable blocked threads are collected, and it is definitely not "fairly unpredictable" whether a blocked thread will be collected. There are several common design patterns that involve blocked threads being collected, and in each case it's very clear why it's being collected, and in other cases you still close them down manually.

In Go's case, we wouldn't be removing any code that shuts goroutines down in most existing code; we would simply gain new idioms in which it is very obvious whether goroutines are collected. This proposal isn't intended to deprecate goroutine-termination devices like tombs, etc., but rather to allow goroutines to implement constructs that aren't exactly thread-like, such as coroutines and generators.
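As a concrete instance of the generator idiom, this sketch (in the style of the example at the top of the issue) leaks its feeding goroutine today, but the goroutine would be collectible under the proposal once the consumer abandons the channel:

```go
package main

import "fmt"

// naturals returns a channel yielding 0, 1, 2, ... The feeding
// goroutine blocks forever once the consumer stops receiving;
// today it leaks, under this proposal it would be collected.
func naturals() <-chan int {
	ch := make(chan int)
	go func() {
		for i := 0; ; i++ {
			ch <- i
		}
	}()
	return ch
}

func main() {
	for n := range naturals() {
		if n >= 3 {
			break // abandons the generator goroutine
		}
		fmt.Println(n) // prints 0, 1, 2
	}
}
```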

@nullchinchilla

commented Mar 31, 2017

Just as an example, this proposal would allow users not to call Stop() on time.Ticker. Calling Stop on objects that don't manage external resources should be the GC's job.

Essentially, this proposal can make "is this object implemented by a background goroutine" a well-encapsulated implementation detail. You might be able to implement time.Ticker without needing Stop() by some magic fiddling with the runtime and scheduler, but whether you do that, or simply use a goroutine that sleeps periodically, shouldn't be exposed in the form of "does it leak memory unless you stop it".

@ianlancetaylor

Contributor

commented Apr 1, 2017

The issue of calling Stop on a time.Ticker is actually a different problem. Implementing this suggestion would not solve it. The problem there is that a time.Ticker has an entry in the runtime timer table, and there is nothing that removes that entry even if the time.Ticker is garbage collected.

@nullchinchilla

commented Apr 1, 2017

@ianlancetaylor Ah, I always thought the reason was that time.Ticker was backed by a sleeping goroutine in a loop. But if this proposal were implemented, such an implementation would indeed avoid the need to call Stop.

@golang golang locked and limited conversation to collaborators Apr 1, 2018
