runtime: macOS Sierra builders spinning #18751

Open
bradfitz opened this Issue Jan 22, 2017 · 34 comments

@bradfitz (Member) commented Jan 22, 2017

I finally (sorry) set up macOS Sierra VMs last week.

They keep hanging in infinite loops. I just attached to one of them to see what's going on:

[screenshot: 2017-01-22 10:22 AM]

The go_bootstrap install -v std cmd command is spinning.

Using dtrace -p <pid>, I see an endless stream of:

select(0x0, 0x0, 0x0, 0x0, 0xC82003DF00) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0xC82003DF00) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0xC82003DF00) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0xC82003DF00) = 0 0
select(0x0, 0x0, 0x0, 0x0, 0xC82003DF00) = 0 0

The GOROOT_BOOTSTRAP is the Go 1.4 branch with the Sierra fixes (it barely makes it a second into the build otherwise).

The VMs are running under VMWare ESXi, just like all the OS X 10.8, 10.10, and 10.11 builders, which are all fine.

@bradfitz (Member) commented Jan 22, 2017

(Other comment deleted. The SIGQUIT shown in the deleted post was the buildlet process, not the spinning one.)

@josharian (Contributor) commented Jan 22, 2017

Probably unrelated, but just in case... yesterday, the net/http tests started hanging on my laptop, and it required a reboot to get them passing again. The test that was hanging was TestTransportPersistConnReadLoopEOF. I'm afraid I don't have the stack traces anymore. Also probably unrelated: many of the net/http benchmarks fail on Sierra with 'too many open files'.

@bradfitz (Member) commented Jan 22, 2017

@josharian, this is just go install, so we're not even running tests yet. But I could imagine the highly-parallel net/http tests using lots of FDs. Please open a separate bug for that.

@aclements (Member) commented Jan 23, 2017

@bradfitz, can you kill -QUIT the go_bootstrap process directly to get a traceback? Or if that doesn't work, maybe send SIGQUIT to the whole make.bash process group (as ^\ at the terminal would).

@bradfitz (Member) commented Jan 23, 2017

I did kill -QUIT the go_bootstrap process directly to no avail, several times. It didn't appear to quit. Maybe that's telling in itself.

I then also tried Ctrl-\ at the parent (the buildlet) and got a backtrace of only the buildlet (the backtrace paths were like /home/bradfitz instead of the expected /Users/gopher).

Trying again now.

@dhananjay92 (Member) commented Jan 23, 2017

Possibly related: #17161. Infinite retry was added to solve that issue. From the second stack trace, it looks like it may be going into that infinite retry loop.

@aclements (Member) commented Jan 23, 2017

It didn't appear to quit. Maybe that's telling in itself.

That's too bad. A few other things come to mind: Does it freeze even if you run a trivial subcommand like, say, go_bootstrap version? Can you dtrace go_bootstrap from the start? Can this be bisected?

Possibly related: #17161. Infinite retry was added to solve that issue. From the second stack trace, it looks like it may be going into that infinite retry loop.

I would expect to see Mach semaphore calls in the dtrace if this were the case, though maybe those are at the wrong level in the OS X kernel for dtrace? (I've always found this confusing.) The second traceback was just from the buildlet, so I don't think it's related. That's also exactly what I would expect a normal parked P's stack to look like.

@ianlancetaylor (Contributor) commented Jan 23, 2017

Can you add a -x to the go_bootstrap command near the end of make.bash so that we can see where it is hanging?

In case it's not obvious the select calls are from calls to runtime.usleep, but unfortunately we don't know the duration of the sleep so we don't know precisely which call is looping. It could just be the sysmon thread.

@quentinmit (Contributor) commented Jan 23, 2017

@aclements dtrace only shows traditional BSD-style syscalls, not Mach calls.

@minux (Member) commented Jan 23, 2017

@rsc (Contributor) commented Jan 23, 2017

Most likely explanation is you are bootstrapping with a stale Go 1.4, without the Sierra time fix patches.

@bradfitz (Member) commented Jan 23, 2017

@rsc, I addressed that in the top comment. A stale Go 1.4 was a problem when I first set up the builder (cloning from 10.11), but it revealed itself very quickly (the build barely made it a second in).

But I just double-checked anyway.

I logged into a 10.12 VM, extracted https://storage.googleapis.com/golang/go1.4-bootstrap-20161024.tar.gz from the https://golang.org/doc/install/source page into a new directory and diffed the VM's baked-in $HOME/go1.4 against the new extracted dir. They're the same.

So the Sierra machine's $HOME/go1.4 is a modern one.

Unless https://storage.googleapis.com/golang/go1.4-bootstrap-20161024.tar.gz is bogus, but people have been successfully using it.

@rsc (Contributor) commented Jan 23, 2017

@josharian your hang is #18541. I think that's not the issue here (no network involved).

@rsc (Contributor) commented Jan 23, 2017

I'm a little concerned about the go1.4-bootstrap tar not having e39f2da (fix unsigned shift "warnings"), but I just tried flipping my go1.4 to the CL from the bootstrap tar and it works OK. I still think this seems like the time problem, but clearly not that exact time problem (sorry for missing that in the original post). Maybe something is different when running under VMware.

@bradfitz can you send me instructions to connect to the VMs off-list?

@bradfitz (Member) commented Jan 23, 2017

@bradfitz can you send me instructions to connect to the VMs off-list?

Done.

@rsc (Contributor) commented Jan 23, 2017

OK, I'm running make.bash in a loop waiting for a hang to poke at. In the meantime, I've remembered #18540, which may be the same thing - unexplained hang waiting for child.

@rsc (Contributor) commented Jan 23, 2017

I have run 61 successful make.bash in a row on that VM.

I also restarted the buildlet. Lots of Go 1.6 SIGSEGVs, not surprising. On Go 1.8, one hang in crypto/ssh during netPipe, which does:

func netPipe() (net.Conn, net.Conn, error) {
	listener, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return nil, nil, err
	}
	defer listener.Close()
	c1, err := net.Dial("tcp", listener.Addr().String())
	if err != nil {
		return nil, nil, err
	}

	c2, err := listener.Accept()
	if err != nil {
		c1.Close()
		return nil, nil, err
	}

	return c1, c2, nil
}

That looks a little suspect. I don't know if it's guaranteed that the Dial will return before the Accept executes. I think there's a second instance of this happening right now, but it's a 10 minute timeout.

Still waiting for a hang like in the original report.

@rsc (Contributor) commented Jan 24, 2017

The buildlet is hung right now running 'go list', but the specific binary is Go 1.6.4, which doesn't have the time fixes.

It looks to me like the buildlet is being used for the subrepo tests as well, including "subrepos on Go 1.6" (right now it is stuck on go list golang.org/x/blog/...). That's just not going to work on Sierra and we should avoid running those tests on the Sierra buildlet.

@gopherbot commented Jan 24, 2017

CL https://golang.org/cl/35643 mentions this issue.

@bradfitz (Member) commented Jan 24, 2017

@rsc, ugh, fun. The JSON at https://build.golang.org/?mode=json doesn't give me quite enough info to work with. I get:

		{
			"repo": "blog",
			"revision": "59665c2db910d030fff2c1197aa31438eedd296e",
			"parentRevisions": [
				"165bdf8fabc3238454c4083e9d32e156a3440a5d"
			],
			"goRevision": "aa1e69f3fc21795b6fab531a07008e0744ffe5bf",
			"date": "2016-12-13T21:33:03Z",
			"branch": "master",

Which is "master" for the "blog" repo, but the listed goRevision of aa1e69f is the Go 1.6 branch.

I guess I need to query the git graph in the build system to figure out whether that revision has the Sierra time fix or not.

gopherbot pushed a commit to golang/build that referenced this issue Jan 25, 2017

cmd/coordinator: don't build pre-Go 1.7 code on Sierra builders
Updates golang/go#18751

Change-Id: Iadd7dded079376a9bf9717ce8071604cee95d8ef
Reviewed-on: https://go-review.googlesource.com/35643
Reviewed-by: Kevin Burke <kev@inburke.com>
Reviewed-by: Russ Cox <rsc@golang.org>

@bradfitz (Member) commented Jan 26, 2017

Okay, the Sierra builder is only doing Go 1.7+ branches now, and isn't building subrepos.

And the buildlet binary is verified to be built with Go 1.8.

What's up with all the TLS timeout errors? See the third column at https://build.golang.org/

It's quite red:
[screenshot: 2017-01-26 2:45 PM]

e.g. https://build.golang.org/log/e0a86210e3754182956958d6e8e76b5069c7788e

@bradfitz (Member) commented Jan 26, 2017

FWIW, the macOS firewall is off on that VM.

@rsc (Contributor) commented Jan 27, 2017

Maybe the kernel module is buggy even when the firewall is turned off? This looks like the usual bad firewall:

goroutine 186 [IO wait, 2 minutes]:
net.runtime_pollWait(0x17af590, 0x77, 0xc420051301)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/runtime/netpoll.go:164 +0x59
net.(*pollDesc).wait(0xc4200caa08, 0x77, 0x0, 0x0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/fd_poll_runtime.go:75 +0x38
net.(*pollDesc).waitWrite(0xc4200caa08, 0xc42007ae38, 0x14cd0c0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/fd_poll_runtime.go:84 +0x34
net.(*netFD).connect(0xc4200ca9a0, 0x14cd0c0, 0xc42007ae38, 0x0, 0x0, 0x14c8900, 0xc4203cea80, 0x0, 0x0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/fd_unix.go:156 +0x24e
net.(*netFD).dial(0xc4200ca9a0, 0x14cd0c0, 0xc42007ae38, 0x14ce480, 0x0, 0x14ce480, 0xc420010e40, 0xc4202d8548, 0x11a3fce)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/sock_posix.go:140 +0xe9
net.socket(0x14cd0c0, 0xc42007ae38, 0x1366525, 0x3, 0x2, 0x1, 0x0, 0x0, 0x14ce480, 0x0, ...)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/sock_posix.go:92 +0x1a1
net.internetSocket(0x14cd0c0, 0xc42007ae38, 0x1366525, 0x3, 0x14ce480, 0x0, 0x14ce480, 0xc420010e40, 0x1, 0x0, ...)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/ipsock_posix.go:154 +0x129
net.doDialTCP(0x14cd0c0, 0xc42007ae38, 0x1366525, 0x3, 0x0, 0xc420010e40, 0x1517fd0, 0xc4202d8748, 0x1517fd0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/tcpsock_posix.go:62 +0xb9
net.dialTCP(0x14cd0c0, 0xc42007ae38, 0x1366525, 0x3, 0x0, 0xc420010e40, 0xc42007ae38, 0x40, 0x2)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/tcpsock_posix.go:58 +0xe4
net.dialSingle(0x14cd0c0, 0xc42007ae38, 0xc42001a080, 0x14cac80, 0xc420010e40, 0x0, 0x0, 0x0, 0x0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/dial.go:521 +0x479
net.dialSerial(0x14cd0c0, 0xc42007ae38, 0xc42001a080, 0xc420015330, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/dial.go:489 +0x220
net.(*Dialer).DialContext(0xc420051b88, 0x14cd0c0, 0xc42007ae38, 0x1366525, 0x3, 0xc420015310, 0xf, 0x0, 0x0, 0x0, ...)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/dial.go:371 +0x70b
net.(*Dialer).Dial(0xc4202d8b88, 0x1366525, 0x3, 0xc420015310, 0xf, 0xc4202d8b48, 0x0, 0x0, 0xc4202d8b00)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/dial.go:294 +0x75
crypto/tls.DialWithDialer(0xc4202d8b88, 0x1366525, 0x3, 0xc420015310, 0xf, 0xc4200d8180, 0x11a220a, 0xc420015300, 0x9)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/crypto/tls/tls.go:120 +0xb0
crypto/tls.Dial(0x1366525, 0x3, 0xc420015310, 0xf, 0xc4200d8180, 0xc4200785a0, 0xc4202ec000, 0x0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/crypto/tls/tls.go:170 +0xa6
crypto/tls.TestTLSUniqueMatches(0xc4200821a0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/crypto/tls/tls_test.go:313 +0x2be
testing.tRunner(0xc4200821a0, 0x137fa10)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/testing/testing.go:657 +0x96
created by testing.(*T).Run
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/testing/testing.go:697 +0x2ca

goroutine 187 [IO wait, 2 minutes]:
net.runtime_pollWait(0x17af650, 0x72, 0x0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/runtime/netpoll.go:164 +0x59
net.(*pollDesc).wait(0xc4200ca998, 0x72, 0x0, 0xc4204e4000)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/fd_poll_runtime.go:75 +0x38
net.(*pollDesc).waitRead(0xc4200ca998, 0xffffffffffffffff, 0x0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/fd_poll_runtime.go:80 +0x34
net.(*netFD).accept(0xc4200ca930, 0x0, 0x14c8700, 0xc4204e4000)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/fd_unix.go:430 +0x1e5
net.(*TCPListener).accept(0xc420156008, 0xc400000076, 0x1, 0xc42047de08)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/tcpsock_posix.go:136 +0x2e
net.(*TCPListener).Accept(0xc420156008, 0x17, 0xc4203ce9a0, 0x17, 0xc4203ce9a0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/net/tcpsock.go:228 +0x49
crypto/tls.TestTLSUniqueMatches.func1(0x14cc940, 0xc420156008, 0xc4200821a0, 0xc4200785a0)
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/crypto/tls/tls_test.go:296 +0x60
created by crypto/tls.TestTLSUniqueMatches
	/var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/go/src/crypto/tls/tls_test.go:309 +0xf7

Each goroutine is waiting for the other (in the network stack).

@rsc (Contributor) commented Jan 27, 2017

kextstat shows com.apple.nke.applicationfirewall is still loaded. I've only ever seen the firewall hang on new connections, and this is exactly that. If I run "telnet localhost 12" on the VM, it just hangs; on my laptop it immediately gets connection refused. The hang is what I saw the one time on my laptop when the firewall was screwed up. Rebooting fixed it. Not sure if that's a tenable solution here, or if not, what we should do instead.

The revdial issue earlier was also a network hang but it was a write of data, not a connect attempt. I've never seen the firewall hang during a write of data, but who knows.

@rsc (Contributor) commented Jan 27, 2017

Spent a while trying to get more information but the OS is too locked down. Rebooting.

@zchee (Contributor) commented Feb 3, 2017

Hi,
I've been trying to solve a problem that may be caused by firewall settings, though it turned out to be something else entirely: machine-drivers/docker-machine-driver-xhyve#107. That one is caused by Hypervisor.framework and vmnet.framework, so it really is a separate problem.

That said, I don't know if it's better than dtrace, but grepping Console.app is very useful for kernel-level network debugging. It requires disabling SIP, because some errors aren't reported while SIP is enabled. At minimum, it can surface errors at the /usr/lib/system/libsystem_network.dylib function level.

Some examples of the log levels you can get:
https://gist.github.com/zchee/46f5d810bffa34c16a5866e60b6c516e
https://gist.github.com/adamreese/a82c10b39b438de7328cbc7f475aa61f

Adding subsystem: com.apple.network to the search dialog filters by subsystem (meaning the Mach kernel subsystem).

Also, macOS Sierra added an OS-level log command; see /usr/bin/log or man log. For example: sudo log show --info --debug --predicate 'subsystem == "com.apple.network"'. It also reports libsystem_network.dylib function-level errors.

You may well have looked at all this already, given your expertise; if so, sorry for the noise. But if you've only run dtrace and it hasn't caught anything, it may be worth a look (though it's rudimentary debugging and might not be useful in this case). I hope you find it informative.

rsc modified the milestones: Go1.9Maybe, Go1.8Maybe Feb 7, 2017

@ianlancetaylor (Contributor) commented Jun 28, 2017

@bradfitz What's the status of this issue?

bradfitz modified the milestones: Go1.10, Go1.9Maybe Jun 28, 2017

@bradfitz (Member) commented Jun 28, 2017

Bumping to Go 1.10, but it'd be nice to understand why we can't run all.bash in a loop on Sierra without wedging the machine.

@aclements (Member) commented Nov 8, 2017

Is this still an issue?

@bradfitz (Member) commented Nov 8, 2017

Well, we worked around the issue by making the macOS Sierra VMs single-use. After each build, the VM is nuked and re-cloned/booted from its base image. That adds about 60 seconds of wasted time, but it's probably the right thing to do from an isolation standpoint anyway. But I assume the underlying issue remains: we can't run all.bash in a loop on Sierra.

@rsc (Contributor) commented Nov 9, 2017

Is it easy to try taking the nuke-the-VM step out? I found and fixed a big problem with Sierra stability vs all.bash a few weeks ago. After that I was able to run all.bash in a loop (4 loops in parallel in different directories, just to make things interesting) for 24 hours on a standard Sierra-image MacBook Pro. It would be interesting to know if the VMs are differently broken or had the same root cause.

rsc modified the milestones: Go1.10, Go1.11 Nov 22, 2017

gopherbot pushed a commit to golang/build that referenced this issue Dec 7, 2017

cmd/buildlet: always halt Mac VMs after builds, regardless of macOS version

We used to do this only by necessity on Sierra, due to Sierra issues,
but now that x/build/cmd/makemac is better, we can just do this all
the time. Doing it always is better anyway, to guarantee fresh
environments per build.

Also add the forgotten sudo in the Mac halt. The
env/darwin/macstadium/image-setup-notes.txt even calls out how to
setup password-free sudo to the images, but then I forgot to use it.
It only worked before (if it did?) because the process ended and
failed its heartbeat, at least some of the time. It's also possible it
was never working. The old reason that Sierra machines were
special-cased to reboot was reportedly fixed anyway, in
golang/go#18751 (comment).

Updates golang/go#9495

Change-Id: Iea21d7bc07467429cde79f4212c2b91458f8d8d8
Reviewed-on: https://go-review.googlesource.com/82355
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>

@ianlancetaylor (Contributor) commented Jul 10, 2018

It's probably worth retrying this after all the work on #17490.

ianlancetaylor modified the milestones: Go1.11, Go1.12 Jul 10, 2018

@dmitshur (Member) commented Nov 2, 2018

Posting here since #18541 (which mentions this test more directly) is closed with:

Closing. #18751 tracks the more general Sierra weirdnesses.

I just got this net/http TestTransportPersistConnLeak test failure on my work Mac with 10.13.6 while running all.bash on CL 146860. It seems like a flaky test, judging by issue search results and by the fact that I only got it once; the second time it passed.

Adding a data point here since it's not already mentioned in this issue.

Test Failure Output
--- FAIL: TestTransportPersistConnLeak (0.00s)
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58386->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58389->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58391->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58390->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58388->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58395->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58394->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58396->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58393->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58392->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58405->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58410->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58404->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58407->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58401->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58387->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58402->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58399->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58400->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58398->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58403->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58408->127.0.0.1:58385: read: connection reset by peer
    transport_test.go:1413: client fetch error: Get http://127.0.0.1:58385: read tcp 127.0.0.1:58409->127.0.0.1:58385: read: connection reset by peer
panic: Fail in goroutine after TestTransportPersistConnLeak has completed

goroutine 2061 [running]:
testing.(*common).Fail(0xc000376600)
	/Users/dmitshur/gotip/src/testing/testing.go:553 +0x128
testing.(*common).Errorf(0xc000376600, 0x1511423, 0x16, 0xc0003c1f98, 0x1, 0x1)
	/Users/dmitshur/gotip/src/testing/testing.go:645 +0x83
net/http_test.TestTransportPersistConnLeak.func2(0xc000285b00, 0xc00051cc60, 0xc000319e00, 0xc000376600, 0xc000319e80)
	/Users/dmitshur/gotip/src/net/http/transport_test.go:1413 +0xec
created by net/http_test.TestTransportPersistConnLeak
	/Users/dmitshur/gotip/src/net/http/transport_test.go:1409 +0x226
panic: Fail in goroutine after TestTransportPersistConnLeak has completed

goroutine 2070 [running]:
testing.(*common).Fail(0xc000376600)
	/Users/dmitshur/gotip/src/testing/testing.go:553 +0x128
testing.(*common).Errorf(0xc000376600, 0x1511423, 0x16, 0xc000579f98, 0x1, 0x1)
	/Users/dmitshur/gotip/src/testing/testing.go:645 +0x83
net/http_test.TestTransportPersistConnLeak.func2(0xc000285b00, 0xc00051cc60, 0xc000319e00, 0xc000376600, 0xc000319e80)
	/Users/dmitshur/gotip/src/net/http/transport_test.go:1413 +0xec
created by net/http_test.TestTransportPersistConnLeak
	/Users/dmitshur/gotip/src/net/http/transport_test.go:1409 +0x226
FAIL	net/http	0.589s