
net: FreeBSD build failed with net.inet.tcp.blackhole=2 #28883

Open
sternix opened this Issue Nov 20, 2018 · 13 comments

@sternix

sternix commented Nov 20, 2018

What version of Go are you using (go version)?

$ go version
go version go1.11.2 freebsd/amd64

Does this issue reproduce with the latest release?

Yes

What operating system and processor architecture are you using (go env)?

go env Output
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/sternix/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="freebsd"
GOOS="freebsd"
GOPATH="/home/sternix/go"
GOPROXY=""
GORACE=""
GOROOT="/opt/go/1_11_2/go"
GOTMPDIR=""
GOTOOLDIR="/opt/go/1_11_2/go/pkg/tool/freebsd_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build323464270=/tmp/go-build -gno-record-gcc-switches"

What did you do?

I wanted to build Go (master) from source, following the instructions at https://golang.org/doc/install/source.

% git rev-parse --short HEAD
90777a3

What did you expect to see?

ALL TESTS PASSED

What did you see instead?

Failed: exit status 1

With the default net.inet.tcp.blackhole=0 setting, the build completes successfully, but with

sudo sysctl net.inet.tcp.blackhole=2

it fails with the errors attached.

net.inet.tcp.blackhole: Do not send RST on segments to closed ports

I tested the build on FreeBSD 11.2 amd64 and FreeBSD 12.0-RC1 amd64 with the same results.

Thanks.

go_build.txt
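For reference, the report above boils down to the following session (the checkout path is hypothetical; the sysctl values are from this report):

```shell
# Enable blackhole mode 2: drop any TCP segment arriving on a closed port, send no RST.
sudo sysctl net.inet.tcp.blackhole=2

# Build Go from source; all.bash also runs the full test suite, which is where it fails.
cd go/src && ./all.bash

# Restore the default afterwards.
sudo sysctl net.inet.tcp.blackhole=0
```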

@ianlancetaylor


Contributor

ianlancetaylor commented Nov 20, 2018

CC @mikioh

Errors in net package are:

--- FAIL: TestVariousDeadlines (5.01s)
    timeout_test.go:880: 1ns run 1/1
    timeout_test.go:905: for 1ns run 1/1, good client timeout after 54.24µs, reading 0 bytes
    timeout_test.go:915: for 1ns run 1/1, server in 101.638µs wrote 32768: readfrom tcp4 127.0.0.1:64748->127.0.0.1:64749: write tcp4 127.0.0.1:64748->127.0.0.1:64749: write: connection reset by peer
    timeout_test.go:880: 2ns run 1/1
    timeout_test.go:905: for 2ns run 1/1, good client timeout after 4.398µs, reading 0 bytes
    timeout_test.go:915: for 2ns run 1/1, server in 111.412µs wrote 32768: readfrom tcp4 127.0.0.1:64748->127.0.0.1:64752: write tcp4 127.0.0.1:64748->127.0.0.1:64752: write: broken pipe
    timeout_test.go:880: 5ns run 1/1
    timeout_test.go:905: for 5ns run 1/1, good client timeout after 19.057µs, reading 0 bytes
    timeout_test.go:915: for 5ns run 1/1, server in 144.639µs wrote 32768: readfrom tcp4 127.0.0.1:64748->127.0.0.1:64753: write tcp4 127.0.0.1:64748->127.0.0.1:64753: write: broken pipe
    timeout_test.go:880: 50ns run 1/1
    timeout_test.go:905: for 50ns run 1/1, good client timeout after 4.887µs, reading 0 bytes
    timeout_test.go:915: for 50ns run 1/1, server in 92.354µs wrote 32768: readfrom tcp4 127.0.0.1:64748->127.0.0.1:64754: write tcp4 127.0.0.1:64748->127.0.0.1:64754: write: broken pipe
    timeout_test.go:880: 100ns run 1/1
    timeout_test.go:905: for 100ns run 1/1, good client timeout after 3.909µs, reading 0 bytes
    timeout_test.go:915: for 100ns run 1/1, server in 99.684µs wrote 32768: readfrom tcp4 127.0.0.1:64748->127.0.0.1:64755: write tcp4 127.0.0.1:64748->127.0.0.1:64755: write: broken pipe
    timeout_test.go:880: 200ns run 1/1
    timeout_test.go:905: for 200ns run 1/1, good client timeout after 3.421µs, reading 0 bytes
    timeout_test.go:915: for 200ns run 1/1, server in 109.456µs wrote 32768: readfrom tcp4 127.0.0.1:64748->127.0.0.1:64756: write tcp4 127.0.0.1:64748->127.0.0.1:64756: write: broken pipe
    timeout_test.go:880: 500ns run 1/1
    timeout_test.go:905: for 500ns run 1/1, good client timeout after 3.91µs, reading 0 bytes
    timeout_test.go:915: for 500ns run 1/1, server in 142.684µs wrote 32768: readfrom tcp4 127.0.0.1:64748->127.0.0.1:64757: write tcp4 127.0.0.1:64748->127.0.0.1:64757: write: broken pipe
    timeout_test.go:880: 750ns run 1/1
    timeout_test.go:905: for 750ns run 1/1, good client timeout after 20.035µs, reading 0 bytes
    timeout_test.go:915: for 750ns run 1/1, server in 99.683µs wrote 32768: readfrom tcp4 127.0.0.1:64748->127.0.0.1:64758: write tcp4 127.0.0.1:64748->127.0.0.1:64758: write: broken pipe
    timeout_test.go:880: 1µs run 1/1
    timeout_test.go:905: for 1µs run 1/1, good client timeout after 777.435µs, reading 384852 bytes
    timeout_test.go:915: for 1µs run 1/1, server in 839.005µs wrote 458752: readfrom tcp4 127.0.0.1:64748->127.0.0.1:64759: write tcp4 127.0.0.1:64748->127.0.0.1:64759: write: broken pipe
    timeout_test.go:880: 5µs run 1/1
    timeout_test.go:905: for 5µs run 1/1, good client timeout after 30.785µs, reading 0 bytes
    timeout_test.go:915: for 5µs run 1/1, server in 126.071µs wrote 32768: readfrom tcp4 127.0.0.1:64748->127.0.0.1:64760: write tcp4 127.0.0.1:64748->127.0.0.1:64760: write: broken pipe
    timeout_test.go:880: 25µs run 1/1
    timeout_test.go:905: for 25µs run 1/1, good client timeout after 468.611µs, reading 237568 bytes
    timeout_test.go:915: for 25µs run 1/1, server in 578.068µs wrote 327680: readfrom tcp4 127.0.0.1:64748->127.0.0.1:64761: write tcp4 127.0.0.1:64748->127.0.0.1:64761: write: broken pipe
    timeout_test.go:880: 250µs run 1/1
    timeout_test.go:905: for 250µs run 1/1, good client timeout after 12.97599ms, reading 7913472 bytes
    timeout_test.go:919: for 250µs run 1/1, timeout waiting for server to finish writing
FAIL
FAIL	net	156.367s

@ianlancetaylor ianlancetaylor changed the title from build: FreeBSD build failed with net.inet.tcp.blackhole=2 to net: FreeBSD build failed with net.inet.tcp.blackhole=2 Nov 20, 2018

@ianlancetaylor


Contributor

ianlancetaylor commented Nov 20, 2018

Why do you want that setting?

It's not clear to me that this is fixable. The tests expect TCP to behave normally, which I think is reasonable. Note that the installed Go toolchain will still work, even though these tests failed.

@sternix


sternix commented Nov 20, 2018

Hi, this is a recommended setting for server security. Yes, I hit the error while building Go, and Go builds fine without this setting, but if you inspect the output you can see deadline errors; the deadline errors are my main problem.

@sternix sternix closed this Nov 20, 2018

@sternix sternix reopened this Nov 20, 2018

@sternix


sternix commented Nov 20, 2018

Hi @ianlancetaylor, the error is not limited to the net package; if you scroll down the attached file, you can see runtime errors like

--- FAIL: TestNetpollDeadlock (60.01s)
crash_test.go:95: testprognet NetpollDeadlock exit status: exit status 2
crash_test.go:409: output does not start with "done\n":
dialing
SIGQUIT: quit
PC=0x457e0e m=0 sigcode=65537

    goroutine 0 [idle]:
    runtime.sys_umtx_op(0x61de70, 0xf, 0x18, 0x0, 0x429664, 0xc000064150, 0x100000010, 0x0, 0x0, 0x7fffffffe5a0, ...)
    	/home/sternix/src/go/src/runtime/sys_freebsd_amd64.s:21 +0x1e
    runtime.futexsleep1(0x61de70, 0xc000000000, 0xffffffffffffffff)
    	/home/sternix/src/go/src/runtime/os_freebsd.go:162 +0x5d
    runtime.futexsleep.func1()
    	/home/sternix/src/go/src/runtime/os_freebsd.go:150 +0x3a
    runtime.futexsleep(0x61de70, 0x7fff00000000, 0xffffffffffffffff)
    	/home/sternix/src/go/src/runtime/os_freebsd.go:149 +0x5d
    runtime.notesleep(0x61de70)
    	/home/sternix/src/go/src/runtime/lock_futex.go:151 +0x9f
    runtime.stoplockedm()
    	/home/sternix/src/go/src/runtime/proc.go:2076 +0x8a
    runtime.schedule()
    	/home/sternix/src/go/src/runtime/proc.go:2477 +0x3a2
    runtime.park_m(0xc000000180)
    	/home/sternix/src/go/src/runtime/proc.go:2605 +0x9c
    runtime.mcall(0x0)
    	/home/sternix/src/go/src/runtime/asm_amd64.s:299 +0x5b
    
    goroutine 1 [select, locked to thread]:
    net.(*sysDialer).dialParallel(0xc0000e2100, 0x54eea0, 0xc000086010, 0xc000080780, 0x1, 0x1, 0xc000080790, 0x1, 0x1, 0x0, ...)
    	/home/sternix/src/go/src/net/dial.go:476 +0x322
    net.(*Dialer).DialContext(0xc000040e48, 0x54eea0, 0xc000086010, 0x52dbed, 0x3, 0x52f582, 0xf, 0x0, 0x0, 0x0, ...)
    	/home/sternix/src/go/src/net/dial.go:413 +0x4e3
    net.(*Dialer).Dial(...)
    	/home/sternix/src/go/src/net/dial.go:338
    net.Dial(0x52dbed, 0x3, 0x52f582, 0xf, 0x1, 0x8, 0x0, 0x0)
    	/home/sternix/src/go/src/net/dial.go:309 +0xa8
    main.NetpollDeadlockInit()
    	/home/sternix/src/go/src/runtime/testdata/testprognet/net.go:19 +0xa7
    main.registerInit(...)
    	/home/sternix/src/go/src/runtime/testdata/testprognet/main.go:20
    main.init.0()
    	/home/sternix/src/go/src/runtime/testdata/testprognet/net.go:13 +0x11c
    
    goroutine 20 [syscall]:
    os/signal.signal_recv(0x0)
    	/home/sternix/src/go/src/runtime/sigqueue.go:139 +0x9c
    os/signal.loop()
    	/home/sternix/src/go/src/os/signal/signal_unix.go:23 +0x22
    created by os/signal.init.0
    	/home/sternix/src/go/src/os/signal/signal_unix.go:29 +0x41
    
    goroutine 22 [IO wait]:
    internal/poll.runtime_pollWait(0x800b1bf00, 0x77, 0xc000066a80)
    	/home/sternix/src/go/src/runtime/netpoll.go:182 +0x55
    internal/poll.(*pollDesc).wait(0xc0000e2198, 0x77, 0x54ee00, 0xc000090280, 0xc0000e2180)
    	/home/sternix/src/go/src/internal/poll/fd_poll_runtime.go:87 +0x9a
    internal/poll.(*pollDesc).waitWrite(...)
    	/home/sternix/src/go/src/internal/poll/fd_poll_runtime.go:96
    internal/poll.(*FD).WaitWrite(...)
    	/home/sternix/src/go/src/internal/poll/fd_unix.go:498
    net.(*netFD).connect(0xc0000e2180, 0x54ee60, 0xc000090280, 0x0, 0x0, 0x54db00, 0xc0000e0180, 0x0, 0x0, 0x0, ...)
    	/home/sternix/src/go/src/net/fd_unix.go:152 +0x27e
    net.(*netFD).dial(0xc0000e2180, 0x54ee60, 0xc000090280, 0x54f440, 0x0, 0x54f440, 0xc000084a50, 0x0, 0xc000041b78, 0x4cf59e)
    	/home/sternix/src/go/src/net/sock_posix.go:149 +0xff
    net.socket(0x54ee60, 0xc000090280, 0x52dbed, 0x3, 0x1c, 0x1, 0x0, 0x0, 0x54f440, 0x0, ...)
    	/home/sternix/src/go/src/net/sock_posix.go:70 +0x1ab
    net.internetSocket(0x54ee60, 0xc000090280, 0x52dbed, 0x3, 0x54f440, 0x0, 0x54f440, 0xc000084a50, 0x1, 0x0, ...)
    	/home/sternix/src/go/src/net/ipsock_posix.go:141 +0x141
    net.(*sysDialer).doDialTCP(0xc0000e2100, 0x54ee60, 0xc000090280, 0x0, 0xc000084a50, 0x506e80, 0x6387f8, 0x0)
    	/home/sternix/src/go/src/net/tcpsock_posix.go:65 +0xc2
    net.(*sysDialer).dialTCP(0xc0000e2100, 0x54ee60, 0xc000090280, 0x0, 0xc000084a50, 0xf9f0c9fb61, 0x11c62909, 0x11c629090002f5b8)
    	/home/sternix/src/go/src/net/tcpsock_posix.go:61 +0xd7
    net.(*sysDialer).dialSingle(0xc0000e2100, 0x54ee60, 0xc000090280, 0x54dee0, 0xc000084a50, 0x0, 0x0, 0x0, 0x0)
    	/home/sternix/src/go/src/net/dial.go:565 +0x348
    net.(*sysDialer).dialSerial(0xc0000e2100, 0x54ee60, 0xc000090280, 0xc000080780, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
    	/home/sternix/src/go/src/net/dial.go:533 +0x221
    net.(*sysDialer).dialParallel.func1(0x54ee60, 0xc000090280, 0xc000080701)
    	/home/sternix/src/go/src/net/dial.go:454 +0x95
    created by net.(*sysDialer).dialParallel
    	/home/sternix/src/go/src/net/dial.go:469 +0x226
    
    goroutine 24 [select]:
    net.(*netFD).connect.func2(0x54ee60, 0xc000090280, 0xc0000e2180, 0xc000076240, 0xc0000761e0)
    	/home/sternix/src/go/src/net/fd_unix.go:129 +0xba
    created by net.(*netFD).connect
    	/home/sternix/src/go/src/net/fd_unix.go:128 +0x256
    
    goroutine 34 [IO wait]:
    internal/poll.runtime_pollWait(0x800b1be30, 0x77, 0xc0000f2000)
    	/home/sternix/src/go/src/runtime/netpoll.go:182 +0x55
    internal/poll.(*pollDesc).wait(0xc000106018, 0x77, 0x54ee00, 0xc0000ec000, 0xc000106000)
    	/home/sternix/src/go/src/internal/poll/fd_poll_runtime.go:87 +0x9a
    internal/poll.(*pollDesc).waitWrite(...)
    	/home/sternix/src/go/src/internal/poll/fd_poll_runtime.go:96
    internal/poll.(*FD).WaitWrite(...)
    	/home/sternix/src/go/src/internal/poll/fd_unix.go:498
    net.(*netFD).connect(0xc000106000, 0x54ee60, 0xc0000ec000, 0x0, 0x0, 0x54dae0, 0xc000108000, 0x0, 0x0, 0x0, ...)
    	/home/sternix/src/go/src/net/fd_unix.go:152 +0x27e
    net.(*netFD).dial(0xc000106000, 0x54ee60, 0xc0000ec000, 0x54f440, 0x0, 0x54f440, 0xc000084a80, 0x0, 0xc000102b78, 0x4cf59e)
    	/home/sternix/src/go/src/net/sock_posix.go:149 +0xff
    net.socket(0x54ee60, 0xc0000ec000, 0x52dbed, 0x3, 0x2, 0x1, 0x0, 0x0, 0x54f440, 0x0, ...)
    	/home/sternix/src/go/src/net/sock_posix.go:70 +0x1ab
    net.internetSocket(0x54ee60, 0xc0000ec000, 0x52dbed, 0x3, 0x54f440, 0x0, 0x54f440, 0xc000084a80, 0x1, 0x0, ...)
    	/home/sternix/src/go/src/net/ipsock_posix.go:141 +0x141
    net.(*sysDialer).doDialTCP(0xc0000e2100, 0x54ee60, 0xc0000ec000, 0x0, 0xc000084a80, 0x506e80, 0x6387f8, 0x0)
    	/home/sternix/src/go/src/net/tcpsock_posix.go:65 +0xc2
    net.(*sysDialer).dialTCP(0xc0000e2100, 0x54ee60, 0xc0000ec000, 0x0, 0xc000084a80, 0xfa02b9c22a, 0x23b5ede9, 0x23b5ede900102db8)
    	/home/sternix/src/go/src/net/tcpsock_posix.go:61 +0xd7
    net.(*sysDialer).dialSingle(0xc0000e2100, 0x54ee60, 0xc0000ec000, 0x54dee0, 0xc000084a80, 0x0, 0x0, 0x0, 0x0)
    	/home/sternix/src/go/src/net/dial.go:565 +0x348
    net.(*sysDialer).dialSerial(0xc0000e2100, 0x54ee60, 0xc0000ec000, 0xc000080790, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
    	/home/sternix/src/go/src/net/dial.go:533 +0x221
    net.(*sysDialer).dialParallel.func1(0x54ee60, 0xc0000ec000, 0xc0000ee000)
    	/home/sternix/src/go/src/net/dial.go:454 +0x95
    created by net.(*sysDialer).dialParallel
    	/home/sternix/src/go/src/net/dial.go:480 +0x558
    
    goroutine 35 [select]:
    net.(*netFD).connect.func2(0x54ee60, 0xc0000ec000, 0xc000106000, 0xc0000fc0c0, 0xc0000fc060)
    	/home/sternix/src/go/src/net/fd_unix.go:129 +0xba
    created by net.(*netFD).connect
    	/home/sternix/src/go/src/net/fd_unix.go:128 +0x256
    
    rax    0x4
    rbx    0x61dd20
    rcx    0x18
    rdx    0x0
    rdi    0x61de70
    rsi    0xf
    rbp    0x7fffffffe578
    rsp    0x7fffffffe528
    r8     0x0
    r9     0x100
    r10    0x7d
    r11    0x3
    r12    0x1
    r13    0xc000076120
    r14    0xc000074360
    r15    0x0
    rip    0x457e0e
    rflags 0x247
    cs     0x43
    fs     0x13
    gs     0x1b

FAIL
FAIL runtime 68.622s

@ianlancetaylor


Contributor

ianlancetaylor commented Nov 20, 2018

My assumption is that if we fix this in the net package, somehow, then all the other problems will be fixed. In any case the net package seems like the place to start.

I haven't looked but I would guess that the tests are reporting deadline errors because when one end of the socket is closed, the other end is not being closed as expected. That seems an inevitable consequence of setting this kernel parameter.

Do you have any suggestion for what we could do to change this?

@sternix


sternix commented Nov 20, 2018

@ianlancetaylor No, I don't have any suggestions; you know this code far better than I do.

@paulzhol


Member

paulzhol commented Nov 20, 2018

The blackhole setting prevents the kernel from replying with an RST packet to a SYN segment (sysctl set to 1) or to any segment (sysctl set to 2) arriving on a closed port.
I don't see where we close the server side other than defer ls.teardown() in the TestVariousDeadlines test.
Could this be just a regular timeout / deadlock?

Maybe unrelated but is pasvch := make(chan result) unbuffered on purpose?

@paulzhol


Member

paulzhol commented Nov 20, 2018

a few additional data points:

  • With net.inet.tcp.blackhole=1, the test passes the same as with net.inet.tcp.blackhole=0.

  • Looking at the -v output of a successful vs failed test with net.inet.tcp.blackhole=2, the successful ones will have lines with write: connection reset by peer in addition to the write: broken pipe ones:

timeout_test.go:880: 500ms run 1/3
timeout_test.go:905: for 500ms run 1/3, good client timeout after 531.471608ms, reading 533069824 bytes
timeout_test.go:915: for 500ms run 1/3, server in 531.509809ms wrote 533135360: readfrom tcp4 127.0.0.1:16447->127.0.0.1:16499: write tcp4 127.0.0.1:16447->127.0.0.1:16499: write: broken pipe

timeout_test.go:880: 500ms run 2/3
timeout_test.go:905: for 500ms run 2/3, good client timeout after 531.854827ms, reading 533536768 bytes
timeout_test.go:915: for 500ms run 2/3, server in 531.875195ms wrote 533626880: readfrom tcp4 127.0.0.1:16447->127.0.0.1:16500: write tcp4 127.0.0.1:16447->127.0.0.1:16500: write: connection reset by peer

These are errors from the server listener handling a client connection doing io.Copy:

// The server, with no timeouts of its own,
// sending bytes to clients as fast as it can.
go func() {
	t0 := time.Now()
	n, err := io.Copy(c, neverEnding('a'))
	dt := time.Since(t0)
	c.Close()
	pasvch <- result{n, err, dt}
}()

The write: broken pipe error is due to the client closing the socket on its side (by sending FIN+ACK).

I'm not sure what is causing the write: connection reset by peer error, or why net.inet.tcp.blackhole=2 is suppressing it somehow.

@ianlancetaylor


Contributor

ianlancetaylor commented Nov 21, 2018

"connection reset by peer" means that the TCP connection received an RST packet. The definition of blackhole=2 is reported to be "Do not send RST on segments to closed ports." So the connection between the setting and the failures seems clear.

@mikioh


Contributor

mikioh commented Nov 21, 2018

As @ianlancetaylor described above, the test case TestVariousDeadlines cheats: it uses TCP in-band signaling (a TCP RST segment exchange) to wait for the server goroutine to shut down, that is, to break the continuous data transfer on the passive-open connection at the server goroutine side. So a fix would be to use out-of-band signaling instead. If someone wants a temporary fix for Go 1.12, please replace "tcp" with "unix" in testVariousDeadlines and keep this issue open. Will fix this along with other test case issues in Go 1.13.

Not sure about TestNetpollDeadlock, but it might also rely on in-band signaling of the underlying connection during the connection setup phase. That is probably harder to fix and should be a separate issue, with a different approach to fixing the flakiness.

@mikioh mikioh removed the OS-FreeBSD label Nov 21, 2018

@mikioh


Contributor

mikioh commented Nov 21, 2018

PS: This is not a FreeBSD-specific issue. It's pretty easy to reproduce on Linux with an unusual netfilter/conntrack configuration, with PF on OpenBSD, etc.

@gopherbot


gopherbot commented Nov 21, 2018

Change https://golang.org/cl/150618 mentions this issue: net: make TestVariousDeadlines detect client closing its connection in the server handler

@paulzhol


Member

paulzhol commented Nov 21, 2018

The definition of blackhole=2 is reported to be "Do not send RST on segments to closed ports." So the connection seems clear.

Just to clarify the above point.

https://www.freebsd.org/cgi/man.cgi?query=blackhole

Normal behaviour, when a TCP SYN segment is received on a port where there is no socket accepting connections, is for the system to return a RST segment, and drop the connection. The connecting system will see this as a "Connection refused".

By setting the TCP blackhole MIB to a numeric value of one, the incoming SYN segment is merely dropped, and no RST is sent, making the system appear as a blackhole. By setting the MIB value to two, any segment arriving on a closed port is dropped without returning a RST.

I don't think either of the two sides is considered a "port where there is no socket accepting connections": the server is in the LISTEN state, while the client, after closing its side of the socket, is in a closing state.

Update:
I was wrong; the client-side socket is considered a "port where there is no socket accepting connections".
I set sysctl net.inet.tcp.log_in_vain=2 during the test to get the following logs:

Nov 21 21:51:37 nexus kernel: TCP: [127.0.0.1]:38763 to [127.0.0.1]:24793 tcpflags 0x10<ACK>; tcp_input: Connection attempt to closed port
Nov 21 21:51:37 nexus kernel: TCP: [127.0.0.1]:38763 to [127.0.0.1]:24793 tcpflags 0x18<PUSH,ACK>; tcp_input: Connection attempt to closed port
Nov 21 21:51:37 nexus kernel: TCP: [127.0.0.1]:38763 to [127.0.0.1]:24793 tcpflags 0x18<PUSH,ACK>; tcp_input: Connection attempt to closed port
Nov 21 21:51:37 nexus kernel: TCP: [127.0.0.1]:38763 to [127.0.0.1]:17240 tcpflags 0x10<ACK>; tcp_input: Connection attempt to closed port
Nov 21 21:51:37 nexus kernel: TCP: [127.0.0.1]:38763 to [127.0.0.1]:17240 tcpflags 0x18<PUSH,ACK>; tcp_input: Connection attempt to closed port
Nov 21 21:51:37 nexus kernel: TCP: [127.0.0.1]:38763 to [127.0.0.1]:41380 tcpflags 0x10<ACK>; tcp_input: Connection attempt to closed port
Nov 21 21:51:37 nexus kernel: TCP: [127.0.0.1]:38763 to [127.0.0.1]:41380 tcpflags 0x18<PUSH,ACK>; tcp_input: Connection attempt to closed port
Nov 21 21:51:37 nexus kernel: TCP: [127.0.0.1]:38763 to [127.0.0.1]:34519 tcpflags 0x10<ACK>; tcp_input: Connection attempt to closed port
Nov 21 21:51:37 nexus kernel: TCP: [127.0.0.1]:38763 to [127.0.0.1]:34519 tcpflags 0x18<PUSH,ACK>; tcp_input: Connection attempt to closed port
Nov 21 21:51:37 nexus kernel: TCP: [127.0.0.1]:38763 to [127.0.0.1]:10914 tcpflags 0x10<ACK>; tcp_input: Connection attempt to closed port

These are the server (listener) goroutines continuing to hammer the closed client port.
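The logging shown above can be reproduced on FreeBSD with the sysctl used in this thread:

```shell
# Log TCP segments that arrive on closed ports (2 = log all such segments, not just SYNs).
sudo sysctl net.inet.tcp.log_in_vain=2

# Run the failing test, then inspect the kernel log for the dropped segments.
tail -f /var/log/messages
```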

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment