
decodeRequest does not respect withSizeLimit #2137

Closed
tewe opened this issue Aug 6, 2018 · 10 comments
Labels: 3 - in progress (someone is working on this ticket), needs-backport, t:routing (routing DSL), t:server (HTTP server)
Milestone: 10.1.5
Comments

tewe commented Aug 6, 2018

When using decodeRequest to handle Content-Encoding: gzip, any withSizeLimit applies to the compressed size, not the uncompressed size. This might facilitate DoS attacks.

The reason might be that decodeRequestWith uses transformDataBytes, which creates a new HttpEntity that is not limitable.
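For illustration, a minimal sketch of the affected pattern (route and limit values are hypothetical; directives as in the Akka HTTP routing DSL):

import akka.http.scaladsl.server.Directives._

val route =
  withSizeLimit(1024 * 1024) {   // limit is checked against the *compressed* bytes on the wire
    decodeRequest {              // inflates Content-Encoding: gzip
      entity(as[String]) { body => // aggregates the decoded bytes without any limit
        complete(s"received ${body.length} characters")
      }
    }
  }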

ghost commented Aug 26, 2018

I tested this and it can indeed crash a server. The following shell commands create a large enough gzip file and send it to the server:

dd if=/dev/zero bs=1M count=6000 | pv | gzip -9 > foo.gz
curl -v -i http://example.com/post-endpoint -H 'Content-Encoding: gzip' --data-binary @foo.gz

jrudolph added the "2 - pick next", "t:server", "t:routing", and "needs-backport" labels on Aug 28, 2018
jrudolph added this to the 10.1.5 milestone on Aug 28, 2018
jrudolph (Member) commented

Thanks for the report. We'll look into it.

jrudolph added the "3 - in progress" label and removed the "2 - pick next" label on Aug 29, 2018
jrudolph (Member) commented Aug 29, 2018

The idea used to be that by basing everything on streaming, these kinds of issues are less likely. However, there are three main places where streams are loaded into memory:

  • toStrict (in all variants)
  • PredefinedFromEntityUnmarshallers.byteStringUnmarshaller which blindly collects everything it gets and which is the base unmarshaller for basically all other strict unmarshallers like those used for the formData directive.
  • StrictForm.unmarshaller which is the basis of the formField directives and which uses a different code path to strictify the request body

The toStrict docs at least contain a warning that this might not be a good idea, but for the unmarshallers this behavior is unexpected.
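To make this concrete, here is a hypothetical pair of routes that aggregate the full (possibly decoded) body through exactly these code paths, with no byte limit of their own (10.1.x behavior before the fix):

import akka.http.scaladsl.server.Directives._

val aggregatingRoutes =
  path("echo") {
    entity(as[String]) { body =>      // strictified via byteStringUnmarshaller
      complete(s"${body.length} chars were buffered in memory")
    }
  } ~
  path("form") {
    formField("name") { name =>       // strictified via StrictForm.unmarshaller
      complete(s"hello $name")
    }
  }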

I think we might want limits at different stages:

  • withSizeLimit / max-content-length as input limits
  • decodeRequest and everything else that potentially inflates data should have a limit
  • toStrict should have a limit
  • PredefinedFromEntityUnmarshallers.byteStringUnmarshaller and other unmarshallers that might collect data should have limits

I propose the following changes:

  • add config settings for each of those cases with a reasonable default limit (which isn't unlimited), like 10MB per request
  • change decodeRequest to add another stage which checks the limit (see the sketch after this list)
    • maybe add a variant of decodeRequest that allows changing the limit as a shortcut
  • same for the toStrict directives
  • limiting the unmarshallers is the hardest part because they are static and currently offer no way to pass in configuration. With a bit of fiddling we could change the internals to pass configuration when they are invoked. Adding limits to decodeRequest and toStrict would contain the problem for now, so changing the unmarshallers could be postponed.
  • create a documentation page listing the limits and explaining the rationale.
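As referenced in the second bullet, a minimal sketch of what such an extra limit-checking stage could look like, expressed as a plain stream transformation (illustrative only, not the actual implementation; the function name is made up):

import akka.http.scaladsl.model.HttpRequest
import akka.stream.scaladsl.Flow
import akka.util.ByteString

// Fails the entity data stream once more than maxBytes of *decoded* data
// have flowed through, using akka-stream's limitWeighted operator.
def limitDecodedBytes(maxBytes: Long)(req: HttpRequest): HttpRequest =
  req.mapEntity(_.transformDataBytes(
    Flow[ByteString].limitWeighted(maxBytes)(_.length.toLong)
  ))

Inside a route this could be applied after decoding, e.g. via mapRequest(limitDecodedBytes(maxBytes)).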

jrudolph (Member) commented

@raboof wdyt?

jrudolph (Member) commented

The problem, though, is that we might end up with a zoo of limits that interact in non-obvious or unforeseen ways. The good thing about the fundamentally streaming nature of Akka HTTP is that it can efficiently handle any amount of data -- as long as it is consumed in a streaming fashion using backpressure.

Ideally, the only things that would need to be limited are consumers that aggregate data.

If we now introduce all those limits, we hinder well-written streaming use cases, because they have to deactivate all of them.

On the other hand, a web server is also always a battlefield where "defense in depth" might be appropriate.

jrudolph (Member) commented

For the time being, here's a workaround which can be used safely:

import akka.http.scaladsl.model.HttpEntity
import akka.http.scaladsl.server.Directive0
import akka.http.scaladsl.server.Directives._

def safeDecodeRequest(maxBytes: Long): Directive0 =
  decodeRequest & mapRequest(_.mapEntity {
    // decodeRequest will create a chunked entity when it decodes something.
    // This adds the missing limit support.
    case c: HttpEntity.Chunked => c.copy(chunks = HttpEntity.limitableChunkSource(c.chunks))
    case e                     => e
  }) & withSizeLimit(maxBytes)
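For illustration, a hypothetical route using the workaround (path and limit are made up):

val route =
  safeDecodeRequest(maxBytes = 8L * 1024 * 1024) {
    path("post-endpoint") {
      entity(as[String]) { body =>
        complete(s"received ${body.length} characters")
      }
    }
  }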

jrudolph (Member) commented

Safe workarounds for Java and Scala here:

https://gist.github.com/jrudolph/2be2e6fcde5f7f395b1dacdb6b70baf7

jrudolph (Member) commented

We made the potential DoS problem public at https://akka.io/blog/news/2018/08/30/akka-http-dos-vulnerability-found. Thanks, @tewe and @TheEmacsShibe, for reporting. Please note that we prefer to receive security reports through special channels, as explained in our guidelines. This ensures they get immediate attention and that we can publish a fix before disclosure.

jrudolph (Member) commented Sep 4, 2018

We are currently working on these solutions:

jrudolph pushed several commits referencing this issue (to jrudolph/akka-http and to the main repository) on Sep 5, 2018, one of them with the message: "And apply configurable default limit otherwise. Fixes #268. Refs #2137." Synesso pushed the same commit to Synesso/akka-http on Sep 5, 2018.
raboof (Member) commented Sep 5, 2018

Between #2186 and #2185, I think we can consider this issue mitigated.

jrudolph closed this as completed on Sep 5, 2018
jrudolph pushed further commits to jrudolph/akka-http on Sep 5, 2018, cherry-picking the fix for backporting, with the message: "…'toStrict' (akka#2186): And apply configurable default limit otherwise. Fixes akka#268. Refs akka#2137. (cherry picked from commit d0cee31)"
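For readers landing here later: the merged changes introduced a configurable limit on decoded request sizes. A hedged sketch of overriding it; the setting name akka.http.routing.decode-max-size and its 8m default are assumptions based on the linked PRs, so check your version's reference.conf:

import com.typesafe.config.ConfigFactory

// Config path is an assumption based on the linked changes (10.1.5);
// verify against your version's reference.conf before relying on it.
val tightenedConfig = ConfigFactory.parseString(
  "akka.http.routing.decode-max-size = 1m" // shipped default is 8m (assumption)
).withFallback(ConfigFactory.load())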