net/http: Denial of Service Protection in Go HTTP Servers #2093

gopherbot opened this Issue Jul 23, 2011 · 14 comments



by tav@espians.com:

Directly exposing a web server written with Go's http package is extremely risky
at the moment as you can be easily subject to denial of service attacks.

The most obvious attack vectors for a web server are:

* Request/header lines being too long
* Header size being too large
* Request body being too large

Right now, only the first of these seems to be handled by the http package. As
far as I can figure out, the textproto.Reader.readLineSlice calls used by
http.ReadRequest implicitly return an error for lines over the length of the
underlying bufio.Reader (4096 bytes?). Whilst this seems perfectly adequate for
my own needs, it would be nice if this were documented somewhere.

However, there seems to be no protection against a large header with a bazillion
lines. A configurable field, e.g. MaxHeaderLines, on the http.Server struct
would be very handy for this. This could then be passed along to the
textproto.Reader.ReadMIMEHeader call made by http.ReadRequest, and it in
turn could return an error if the number of lines exceeds the given length.

As for the request.Body, the maximum size of the request body needs to be
configurable on a per request basis. Right now, there is an implicit limit of 10MB
for application/x-www-form-urlencoded forms. It would be nice if this were
documented somewhere — sorry if I missed it.

But, even the 10MB limit doesn't seem to completely help, because request.Body
as set in http.readTransfer is only limited to the Content-Length provided by the
request. And in the body.Close call, any remaining content in the request body
is copied to ioutil.Discard. So it seems a malicious party could send an extremely
large request body and use up CPU cycles on a server?

The maxMemory parameter to http.Request.ParseMultipartForm doesn't seem to
really offer much protection either. At best, an attacker could just use up all
available disk space by sending large request bodies. But, worse, it seems that
memory could also be exhausted, because the multipart.Part.populateHeaders
call makes use of textproto.Reader.ReadMIMEHeader, which could be fed headers
of arbitrary size.

It would be nice if a LimitRequestBody function field could be added to the
http.Server struct. An ideal signature for it would be:

    func(req *http.Request, contentLength int64) (limit int64, error []byte)

Then, inside http.readTransfer calls, LimitRequestBody could be called with the
current request and Content-Length value. If the function returned an error []byte,
then an HTTP "400 Bad Request" would be sent with the given error as the body.
Else, the returned limit would be used to limit the new &body{} object. And if
LimitRequestBody wasn't set, perhaps limit it to some arbitrary size, e.g. 2GB?

Having such a configurable function would both provide protection against denial
of service attacks as well as allow for limits to be set on a per request basis, e.g.
perhaps you want to allow authenticated users to upload 1GB files, but deny all
anonymous users from sending any request bodies...

In summary, extending http.Server with:

    MaxHeaderLines int
    LimitRequestBody func(*http.Request, int64) (int64, []byte)

And making the respective changes to http.ReadRequest, http.readTransfer, and
textproto.Reader.ReadMIMEHeader should provide significantly better protection
against denial of service attacks.

And, finally, perhaps the order of these two blocks http.Request.ParseMultipartForm
should be reversed?

    if r.MultipartForm != nil {
        return nil
    }
    if r.MultipartForm == multipartByReader {
        return os.NewError("http: multipart handled by MultipartReader")
    }

I'm using 2d7eda309c95 tip.

Thanks for hearing me out, tav

robpike commented Jul 23, 2011

Comment 1:

Owner changed to @bradfitz.

Status changed to Accepted.


bradfitz commented Jul 23, 2011

Comment 2:

Thanks for filing this.  This sort of stuff's been on my mind but I hadn't gotten around
to it or filed specific bugs, so glad there is one now.
It is better than it used to be, though... we used to not even have read/write timeouts,
or a Server struct to hang new options off of.  :)
I run a public webserver with Go's http package so I'm quite motivated to fix this all.
Feel free to add related stuff to this bug.

bradfitz commented Aug 9, 2011

Comment 3:

Cap on header size is now in.  It defaults to 1MB but can be set per-Server.
I don't see the need for LimitRequestBody, though.  That's per-Handler to decide.
Keeping this bug open, though, as there's still more to be done.

Comment 4 by tav@espians.com:

Thanks for enabling the cap on the header size Brad. Much appreciated!
As for the handler enforcing a limit on the body, are you imagining that
request.Body.Reader would be wrapped within the handler?
If so, could you make the http.body struct public, i.e. http.Body? Then
a handler could turn the request.Body ReadCloser into an http.Body and
wrap/modify the Body.Reader...
Or were you imagining something else?
Thanks again, tav

Comment 5 by tav@espians.com:

By the way, the bug in http.Request.ParseMultipartForm is still present...

bradfitz commented Aug 10, 2011


bradfitz commented Aug 22, 2011

Comment 7:

Change is out for review to limit request body size.

bradfitz commented Aug 24, 2011

Comment 8:

And it's in.
It's not on by default yet, though.
For now, install a wrapper handler or, at the beginning of your handler:

    req.Body = http.MaxBytesReader(rw, req.Body, 5 << 20)
Keeping this bug open still, though.

rsc commented Oct 6, 2011

Comment 9:

What's the status of this?

bradfitz commented Oct 6, 2011

Comment 10:

The main thing I still want to do before closing this is change the Server's behavior of
trying really hard to maintain a keep-alive connection by fully consuming the client's
body, regardless of its size.  Currently an attacker can do a multi-gigabyte POST to
some path, have the server reply with 401 Unauthorized, and then happily slurp up
gigabytes just to throw them away.
I'd like to change the behavior so that if the request body has not been fully consumed
when WriteHeader is sent, it'll just be a Connection: close response instead.
Alternatively, a threshold size under which we will slurp.
I keep meaning to go look up what RFC 2616 says (if anything) about whether responses
sent before the request is fully consumed are legal or not.  I can't remember.

davecheney commented Oct 7, 2011

Comment 11:

They are, in certain circumstances. E.g., a POST with a 1GB body can
result in the server sending a 413 if it isn't prepared to accept that
much data. However, that presupposes that the client sends a
Content-Length header, or, better, an Expect header. With
Transfer-Encoding: chunked, things get less clear. RFC 2616 says the
server MAY close the connection. However, in the presence of things
like pipelining, IMHO the server MUST close the connection, not just
shut down its send side, to make it clear the request isn't acceptable.

bradfitz commented Oct 14, 2011

Comment 12:

I think this will be the final fix for this bug, unless anybody can think of other DoS vectors.

bradfitz commented Oct 15, 2011

Comment 13:

This issue was closed by revision 5079129.

Status changed to Fixed.

Comment 14 by tav@espians.com:

Thanks for the great work Brad. It looks great! And, given the significance of it,
perhaps it warrants a blog post on blog.golang.org?
Thanks again, tav

@bradfitz bradfitz was assigned by gopherbot Oct 15, 2011

@mikioh mikioh changed the title from Denial of Service Protection in Go HTTP Servers to net/http: Denial of Service Protection in Go HTTP Servers Jan 22, 2015

@tav tav referenced this issue in bradfitz/http2 Apr 29, 2015


Denial of Service Protection #52

@gopherbot gopherbot locked and limited conversation to collaborators Jun 24, 2016
