
net/http: occasional nil pointer dereference in (*Client).send #37598

noneymous opened this issue Mar 2, 2020 · 8 comments

@noneymous noneymous commented Mar 2, 2020

What version of Go are you using (go version)?

go 1.11 and 1.14

Does this issue reproduce with the latest release?


What operating system and processor architecture are you using (go env)?

go env Output
$ go env
set GO111MODULE=
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\USER\AppData\Local\go-build
set GOENV=C:\Users\USER\AppData\Roaming\go\env
set GOEXE=.exe
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\go
set GOPROXY=,direct
set GOROOT=c:\go
set GOTOOLDIR=c:\go\pkg\tool\windows_amd64
set GCCGO=gccgo
set AR=ar
set CC=gcc
set CXX=g++
set GOMOD=
set CGO_CFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\USER\AppData\Local\Temp\go-build702297486=/tmp/go-build -gno-record-gcc-switches

What did you do?

Wrote a web crawler using net/http to crawl large networks with lots of different web services of all ages and technologies. The crawler mostly works great, but occasionally I'm running into nil pointer dereference panics. I cannot reproduce the issue. Looking at the stack trace, it seems to be buried within net/http and independent of my code.

2020-02-29 18:52:11.810 +0100 	Critical 	Scan scope crashed unexpectedly: runtime error: invalid memory address or nil pointer dereference
	| goroutine 4239417 [running]:
	| runtime/debug.Stack(0xa1f, 0xc001186d00, 0x1d)
	| 	C:/Go/src/runtime/debug/stack.go:24 +0xa4
	| my-project/log.StacktraceIndented(0xbc8784, 0x1, 0x52, 0x0)
	| 	D:/go/src/my-project/log/logger.go:347 +0x3b
	| my-project/scans/webenum.(*Scanner).Run.func1(0xc000481440, 0xc002e0fdb0)
	| 	D:/go/src/my-project/scans/webenum/webenum.go:175 +0x10f
	| panic(0xb2a700, 0xa88be0)
	| 	C:/Go/src/runtime/panic.go:967 +0x16b
	| net/http.send(0xc000317500, 0xcc32e0, 0xc00107e600, 0xbf8ec712e19c8a1c, 0x67b0b803359d, 0x10b9800, 0xc0005fe868, 0xbf8ec712e19c8a1c, 0x1, 0x0)
	| 	C:/Go/src/net/http/client.go:271 +0x503
	| net/http.(*Client).send(0xc0002dabd0, 0xc000317500, 0xbf8ec712e19c8a1c, 0x67b0b803359d, 0x10b9800, 0xc0005fe868, 0x0, 0x1, 0xc0013f0710)
	| 	C:/Go/src/net/http/client.go:176 +0x101
	| net/http.(*Client).do(0xc0002dabd0, 0xc000317500, 0x0, 0x0, 0x0)
	| 	C:/Go/src/net/http/client.go:699 +0x451
	| net/http.(*Client).Do(...)
	| 	C:/Go/src/net/http/client.go:567

C:/Go/src/net/http/client.go:271 is where the actual panic arises, due to a nil pointer dereference when accessing .Body on the response of resp, err = rt.RoundTrip(req). Obviously, for some reason, resp is nil due to some unhandled error, so execution continues until it panics on resp.Body. Here is a copy of the source code of client.go lines 252-275 for quick reference:

252:	resp, err = rt.RoundTrip(req)
253:	if err != nil {
254:		stopTimer()
255:		if resp != nil {
256:			log.Printf("RoundTripper returned a response & error; ignoring response")
257:		}
258:		if tlsErr, ok := err.(tls.RecordHeaderError); ok {
259:			// If we get a bad TLS record header, check to see if the
260:			// response looks like HTTP and give a more helpful error.
261:			// See
262:			if string(tlsErr.RecordHeader[:]) == "HTTP/" {
263:				err = errors.New("http: server gave HTTP response to HTTPS client")
264:			}
265:		}
266:		return nil, didTimeout, err
267:	}
268:	if !deadline.IsZero() {
269:		resp.Body = &cancelTimerBody{
270:			stop:          stopTimer,
271:			rc:            resp.Body,
272:			reqDidTimeout: didTimeout,
273:		}
274:	}
275:	return resp, nil, nil

Maybe a package expert has an idea why rt.RoundTrip(req) might return nil without an error? Obviously, this edge case should not happen, because resp is required down the road...

What did you expect to see?

I'm expecting anything but a panic: either an error or a normal response.

What did you see instead?

A panic.

@noneymous noneymous changed the title net/http/client: Occasional nil pointer dereference in … net/http/client: Occasional nil pointer dereference Mar 2, 2020

@noneymous noneymous commented Mar 2, 2020

I searched the last 7 days of my logs for web servers where my crawler ran into this issue. In total, 890 web services got crawled; with 8 of them the crawler ran into the described issue. I could not reproduce the issue, so I searched for hints/similarities. The similarities I found among all of them were:

  • They communicate via HTTP (not HTTPs)
  • Microsoft IIS web servers
  • They respond with "401 unauthorized" and requested NTLM authentication (WWW-Authenticate: NTLM)

(Sorry, can't share the hosts, they are within an internal network.)


@bcmills bcmills commented Mar 2, 2020

As far as I can tell, this can only happen if rt.RoundTrip returns a nil *Response alongside a nil error.

Is your program using a non-default RoundTripper, for example by setting it in the http.Client's Transport field?

@bcmills bcmills added this to the Backlog milestone Mar 2, 2020
@bcmills bcmills changed the title net/http/client: Occasional nil pointer dereference net/http: occasional nil pointer dereference in (*Client).send Mar 2, 2020

@gopherbot gopherbot commented Mar 2, 2020

Change mentions this issue: net/http: verify RoundTripper invariants in the send function

gopherbot pushed a commit that referenced this issue Mar 2, 2020
Issue #37598 reports a nil-panic in *Client.send that can
only occur if one of the RoundTripper invariants is violated.
Unfortunately, that condition is currently difficult to diagnose: it
manifests as a panic during a Response field access, rather than
something the user can easily associate with a specific erroneous
RoundTripper implementation.

No test because the new code paths are supposed to be unreachable.

Updates #37598

Change-Id: If0451e9c6431f6fab7137de43727297a80def05b
Reviewed-by: Brad Fitzpatrick <>

@noneymous noneymous commented Mar 3, 2020


We are:

  • Configuring our own transport and client:
	return &http.Transport{
		TLSClientConfig:       utils.InsecureTlsConfigFactory(),
		TLSHandshakeTimeout:   timeout,
		ResponseHeaderTimeout: timeout * 2, // TCP connection succeeded, let's give the web app some more time
		DisableKeepAlives:     false,       // Keep-alive is required for NTLM authentication to work!
		Proxy:                 http.ProxyURL(proxy),
	}
	// Create and return fresh client
	return &http.Client{
		Transport: transport,   // HTTP transport to use, may be a reused one with alive connections
		Timeout:   timeout * 2, // TCP connection succeeded, let's give the web app some more time
		// CheckRedirect: -> default is nil, which follows up to 10 consecutive redirects before stopping
	}
  • Patching the CheckRedirect function to step through location redirects with our own code. This might look funny to you, but this way we didn't need to touch the more complex RoundTripper. Instead, we are returning after each step and creating a new request for the next step.
	// Patch redirect function to not automatically handle location redirects. We will take care of this manually
	// to give us more insight into what's happening, e.g. to count redirects and detect www-authenticate responses.
	client.CheckRedirect = func(req *http.Request, via []*http.Request) error {
		return http.ErrUseLastResponse // Return first response directly. Might contain location redirect.
	}
  • Wrapping the transport in ntlmssp.Negotiator (for NTLM authentication). I think they are using the default RoundTripper but doing some of their own round-tripping logic first. Now I'm thinking the problem could also be in (or in association with) their package, though it is not a shady unknown one...
	negotiatedTransport := ntlmssp.Negotiator{
		RoundTripper: client.Transport,
	}
	client.Transport = negotiatedTransport

I added exhaustive logging to our code to find out how far it gets and which paths it executes in the flawed situations. I need to wait until I step into the situation again, but then I should see better what happened, e.g. whether it had already made use of the ntlmssp negotiator, etc...

@noneymous noneymous commented Mar 3, 2020

Ok, it seems to be a bug in go-ntlmssp, and there seems to be a magic update: Azure/go-ntlmssp#24

So I will update to that new version and hope the bug disappears. I guess you can close this ticket for now; I'd come back if the issue comes up again. Thanks guys for your time and help!

@noneymous noneymous closed this Mar 3, 2020

@bcmills bcmills commented Mar 3, 2020

Azure/go-ntlmssp#24 seems to be fixing another instance of the problem described in #377.

Notably, the shadow analyzer detects the bug:

go-ntlmssp$ git status
HEAD detached at 4a21cbd
Untracked files:
  (use "git add <file>..." to include in what will be committed)

nothing added to commit but untracked files present (use "git add" to track)

go-ntlmssp$ shadow ./...
/tmp/tmp.pMusf64dqg/go-ntlmssp/negotiator.go:87:9: declaration of "err" shadows declaration at line 30

I suspect that means other tools like staticcheck could also spot the bug.

So, at least we're aware of this class of bug, but it's a shame that the shadow check isn't in more widespread use.

(FYI @ianthehat)


@noneymous noneymous commented Mar 3, 2020

Nice, I didn't know that tool. We don't have build verification/automation yet (where it would fit in perfectly), as it would be overkill at our current stage. It's great to have the tool, but we can't assume that a majority of developers will ever know about and apply any particular one. In the best case, the language disallows or mitigates this class of bug; otherwise, the first step is knowing that it exists :) Anyways, not complaining...

@bcmills bcmills removed the WaitingForInfo label Mar 3, 2020
@golang golang locked and limited conversation to collaborators Mar 3, 2021