Prevent the serialization of header containing a line terminator #3717
Good point, thanks for reporting (and sorry for the delay), @gontard. I don't have a good idea how to check this cheaply. I don't think we want that check for our own modeled headers. It would certainly make sense for …
Thanks for your answer.
Is it a performance concern?
I don't know the akka-http internals, but naively I would check that systematically.
Refs akka#3717

To avoid accidental response splitting when applications put unsanitized request data into outgoing responses.

We previously discussed introducing a check in the header models, which could avoid overhead during rendering. There are a lot of header models, however, which would all have to be checked one by one for potential issues. This would require a significant effort now to check all the existing headers and would also make it easy to reintroduce problems later on when headers are added or changed.

The approach here requires that all headers are rendered using the new overload of `Rendering.~~`. When that method is used, we render the header and check afterwards that the rendered data does not contain any CR or LF characters.

The performance overhead seems negligible; the new method did not turn up in profiling. An explanation why going over the rendered header data directly after it has been rendered is so cheap could be that:

- it has good locality, because the data has just been written
- it has good branch prediction, because the test will almost never fail
- even if it has linear cost for big headers, those big headers will also have significant memory allocation cost which might dwarf the extra checking
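The post-render scan described above can be sketched in plain Scala. This is a simplified stand-in, not the actual akka-http `Rendering.~~` implementation; `HeaderRenderCheck` and `renderChecked` are hypothetical names:

```scala
// Simplified sketch of the akka#3922 idea: render the header first,
// then scan the freshly written characters for CR/LF.
// NOT the real akka-http code; names here are hypothetical.
object HeaderRenderCheck {
  // Returns None when the header would allow response splitting,
  // mirroring the "drop the offending header" behavior.
  def renderChecked(name: String, value: String): Option[String] = {
    val body = s"$name: $value" // the part that must be CR/LF-free
    // Linear scan over data that was just written: good cache locality,
    // and the branch almost never hits, so it predicts well.
    if (body.exists(c => c == '\r' || c == '\n')) None
    else Some(body + "\r\n") // the terminating CRLF itself is legal
  }
}
```

A well-formed header passes through unchanged, while something like `renderChecked("X-Echo", "a\r\nInjected: b")` yields `None`, so the offending header is dropped instead of being written to the wire.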
#3922 adds a general solution that checks headers for invalid characters while rendering and drops any header in which they occur.
…a#3922) Refs akka#3717 (cherry picked from commit afecb31)
akka-http should throw an `Exception` when a header value containing a line terminator (LF) is sent in a response. For example:
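The original example snippet did not survive in this page. A minimal hypothetical sketch of the reported scenario (`ResponseSplittingDemo`, `serialize`, and `userInput` are made-up names), showing how an unchecked LF splits the serialized response:

```scala
// Hypothetical illustration of response splitting via a header value.
// If the value is serialized unchecked, the embedded LF makes the
// attacker-controlled "Set-Cookie: ..." line look like a second header.
object ResponseSplittingDemo {
  def serialize(headerValue: String): String =
    "HTTP/1.1 200 OK\r\n" +
    s"X-Echo: $headerValue\r\n" + // value written without any CR/LF check
    "\r\n"

  val userInput = "bar\nSet-Cookie: session=attacker" // attacker-controlled
}
```

Here `serialize(userInput)` produces a response in which the injected `Set-Cookie` line would be parsed as a separate header field, which is exactly what the requested check (and the header-dropping fix in #3922) prevents.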
We spotted this issue because our Envoy reverse proxy complains with an `Invalid HTTP header field was received` error.