decodeRequest does not respect withSizeLimit #2137
I tested this and it can indeed crash a server. The following shell commands can create a large enough gzip file and send it to the server:
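The exact commands did not survive extraction; a sketch of the idea (sizes, the file path, and the server URL are illustrative assumptions, not the reporter's values) might look like:

```shell
# Compress ~100 MiB of zeros into a small gzip file (a "gzip bomb").
# Sizes and paths here are illustrative assumptions.
dd if=/dev/zero bs=1M count=100 2>/dev/null | gzip -9 > /tmp/bomb.gz

# POST it with Content-Encoding: gzip so the server decompresses it;
# withSizeLimit only ever sees the small compressed size on the wire.
# The URL is a placeholder; the request is allowed to fail here.
curl -s -X POST -H 'Content-Encoding: gzip' \
  --data-binary @/tmp/bomb.gz http://localhost:8080/ || true
```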
Thanks for the report. We'll look into it.
The idea used to be that by basing everything on streaming, these kinds of issues are less likely. However, there are two main parts where streams are loaded into memory:
I think we might want limits at different stages:
I propose the following changes:
@raboof wdyt?
The problem is somewhat that we might end up with a zoo of limits that might interact in non-obvious or unforeseen ways. The good thing about the basic streaming nature of Akka HTTP is that it can efficiently handle any amount of data -- if it's consumed in a streaming fashion using backpressure. Ideally, the only thing that would need to be limited would be consumers that aggregate data. If we now introduce all those limits, we hinder well-written streaming use cases because they have to deactivate all those limits. On the other hand, a web server is also always a battlefield where "defense in depth" might be appropriate.
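As a plain-Scala sketch of the distinction drawn above (no Akka dependency; all names are made up for illustration): a streaming consumer keeps only constant state per chunk, so it can process a payload of any size without a limit -- only consumers that aggregate need one.

```scala
object StreamingSketch {
  // Simulate a decoded request body as a lazy stream of chunks;
  // nothing is materialized until a consumer pulls from the iterator.
  def chunks(n: Int, chunkSize: Int): Iterator[Array[Byte]] =
    Iterator.fill(n)(Array.fill(chunkSize)(0: Byte))

  // Streaming consumer: touches each chunk once, keeps O(1) state,
  // and therefore handles any total payload size without a limit.
  def countBytes(in: Iterator[Array[Byte]]): Long =
    in.foldLeft(0L)((acc, c) => acc + c.length)
}
```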
For the time being here's a workaround which can be used safely:

```scala
def safeDecodeRequest(maxBytes: Long): Directive0 =
  decodeRequest & mapRequest(_.mapEntity {
    // decodeRequest will create a chunked entity when it decodes something.
    // This adds the missing limit support.
    case c: HttpEntity.Chunked => c.copy(chunks = HttpEntity.limitableChunkSource(c.chunks))
    case e                     => e
  }) & withSizeLimit(maxBytes)
```
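To illustrate what the workaround restores, here is a plain-Scala sketch (no Akka; the object and exception names are hypothetical) of a size-limited chunk stream: it fails as soon as the cumulative decoded size crosses the limit, instead of checking only the compressed size up front.

```scala
object LimitSketch {
  final case class EntityTooLarge(limit: Long)
      extends RuntimeException(s"entity exceeds size limit of $limit bytes")

  // Wrap a chunk stream so it fails once more than maxBytes have flowed
  // through it -- i.e. the decompressed size, not the size on the wire.
  def limited(in: Iterator[Array[Byte]], maxBytes: Long): Iterator[Array[Byte]] = {
    var seen = 0L
    in.map { c =>
      seen += c.length
      if (seen > maxBytes) throw EntityTooLarge(maxBytes)
      c
    }
  }
}
```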
Safe workarounds for Java and Scala here: https://gist.github.com/jrudolph/2be2e6fcde5f7f395b1dacdb6b70baf7
We made the potential DoS problem public at https://akka.io/blog/news/2018/08/30/akka-http-dos-vulnerability-found. Thanks, @tewe and @TheEmacsShibe for reporting. Please note that we prefer getting security reports through special channels, as explained in our guidelines. This ensures the report gets immediate attention and that we are able to publish a fix before disclosure.
We are currently working on these solutions:
When using `decodeRequest` to handle `Content-Encoding: gzip`, any `withSizeLimit` applies to the compressed size, not the uncompressed size. This might facilitate DoS attacks. The reason might be that `decodeRequestWith` uses `transformDataBytes`, which creates a new `HttpEntity` that is not limitable.