Mask CSRF tokens to mitigate BREACH attack #11729
Conversation
```diff
@@ -0,0 +1,54 @@
+module ActionController
+  class AuthenticityToken
+    class << self
```
I wouldn't use class methods here. Most of the methods revolve around the session object, so it would make sense for `AuthenticityToken` to be initialized with a readable `session` attribute, as follows:

```ruby
token = AuthenticityToken.new(session)
token.generate_masked
# ...
```

```ruby
class AuthenticityToken
  attr_reader :session

  def initialize(session)
    @session = session
  end

  def generate_masked
    # ...
  end

  # ...
end
```

On the same topic, based on the Single Responsibility Principle, perhaps you could make separate classes for the token generator and the token itself:

```ruby
token_builder = AuthenticityTokenGenerator.new(session)
token = token_builder.generate_masked
```

Of course, for consumers who only need the token object itself, the above can be simplified to:

```ruby
token = AuthenticityTokenGenerator.new(session).generate_masked # => an AuthenticityToken object
```

Or, even simpler, with a class builder method:

```ruby
token = AuthenticityTokenGenerator.generate(session) # => an AuthenticityToken object
```

This way, the methods related to generating the token can live on the builder class, while the methods related to validating it can live on the token class itself.

The `xor_byte_strings` method can still be a class method, because it's accessed both by the builder and by the token itself, and does not operate on a session.
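For illustration, a byte-wise XOR helper of the kind named `xor_byte_strings` here might look like the following sketch (the name matches the review discussion, but the body is an assumption, not the PR's actual code):

```ruby
# Sketch of an xor_byte_strings-style helper (body assumed, not the
# PR's actual code): XOR two equal-length binary strings byte by byte.
def xor_byte_strings(a, b)
  raise ArgumentError, "length mismatch" unless a.bytesize == b.bytesize
  a.bytes.zip(b.bytes).map { |x, y| x ^ y }.pack("C*")
end

pad    = "abcd"
secret = "wxyz"
masked = xor_byte_strings(pad, secret)
xor_byte_strings(pad, masked) == secret # => true; XOR is its own inverse
```

Because XOR is an involution, the same helper both masks and unmasks, which is why it is useful to the builder and the token class alike.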
---
It's an SSL problem, isn't it? Can you imagine the hassle of generating a new token every time?

---
Doesn't look too bad. He is implementing the suggested solution.
---
Would this have the side-effect of also protecting encrypted cookie sessions from this exploit, as the ciphertext of the session cookie (containing the CSRF token) would be different on every request?

---
After thinking about it, wouldn't just adding random characters to the session on every request (when using encrypted sessions) mitigate this attack entirely for all data in the session? Actually, on further reflection, would I be right in assuming that the session isn't at risk at all because HTTP headers aren't compressed? I guess this still affects the CSRF token in the page body, though.

---
The paper states HTTP-level compression as one of the preconditions. That is something users may configure their web servers to do, so such users may want this proposed behavior in their Rails apps.

---
It would be worth having the masking/unmasking functions available as a library interface rather than baked into `ActionController`.

---
@waynerobinson The ciphertext of cookie-stored sessions is different every time anyway, because it uses a random IV during encryption. If you encrypt the same plaintext twice, you should get a different result each time; otherwise, repeating the same ciphertext over the wire would tell the attacker that you sent the same data twice, which leaks information.
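This point can be demonstrated directly with OpenSSL (a standalone sketch, not Rails' actual cookie-encryption code): encrypting the same plaintext twice under the same key yields different ciphertexts, because each call uses a fresh random IV.

```ruby
require "openssl"
require "securerandom"

KEY = SecureRandom.random_bytes(32)

# Encrypt with AES-256-CBC using a fresh random IV on each call,
# prepending the IV so a receiver could decrypt.
def encrypt(plaintext)
  cipher = OpenSSL::Cipher.new("aes-256-cbc").encrypt
  cipher.key = KEY
  iv = cipher.random_iv
  iv + cipher.update(plaintext) + cipher.final
end

encrypt("same session data") == encrypt("same session data") # => false
```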
---
Calling in the @NZKoz bat signal.
---
We need something universal. Maybe we should transfer the CSRF token as an additional cookie, readable on the client side. It would be in a header, not included in the response body.

---
@waynerobinson Nothing in the HTTP headers is at risk, because they're not in the same compression context as the body (you're right; they're not compressed at all). @homakov Not sending the CSRF token each time would definitely mitigate the attack (or sending it in a header). The problem is that this would break non-XHR form posts, which include the CSRF token as the `authenticity_token` parameter.

---
@egilburg Thanks for the comments; I'll make these changes. I'll also do some benchmarking; we definitely need to figure out whether this is expensive or not. If the masking is cheap, I'd just as soon do it on every request and not make it a configuration option (or at least make it opt-out): users should not want to opt out of masking unless there's a performance hit, or they're doing something really exotic/custom with CSRF tokens.

---
One other option to improve performance and reduce complexity is to use a different technique to obfuscate the authenticity token. There's nothing special about the OTP / XOR masking algorithm suggested in the paper; we just need a way to randomly obfuscate the data on each request in a way that the server can check. So something like
---
Please have a look; this should be a faster and better protection: https://gist.github.com/homakov/6147227
Yes, if JS is off we need something different.
---
You can't implement CSRF tokens as a cookie; the weaknesses of cookies are what CSRF tokens are supposed to prevent. If you send the token as a cookie, the browser will attach it to all requests to your server regardless of which origin they come from, defeating the point of CSRF protection. Plus, any security solution that relies on JavaScript should be considered a weak solution.

---
If you're concerned about the performance of XOR (which might be a reasonable concern, but someone should benchmark it and find out), then write it in C.
---
@jcoglan I know how CSRF works at my fingertips :) You probably misunderstood what I proposed: instead of a plain tag, we put the token into Set-Cookie, and only after page load (at runtime) do we add CSRF tokens and other important information.
That's a very foggy argument; what exactly is wrong with this one? Besides XORing, we need a way to hide ANY secret tokens (the api_key in my demo). How are you going to hide them? By un-XORing with JavaScript? Set-Cookie is the simplest solution, with only one weakness: it requires JS to be on.

---
Any solution that assumes the user agent will run JavaScript as part of the security process cannot be general-purpose, since not all user agents run JavaScript. Invoking the user agent's JS runtime also means there's another component we need to place our trust in.

XOR masking seems like a totally reasonable approach. It's easily understood and easy to implement, and does not rely on client-side behaviour in order to protect the server. It's the same technique WebSocket applies to all data to prevent the client from constructing arbitrary byte sequences. The argument that XOR is slow has not been tested, and we should not make arguments from performance when we have no numbers to talk about.

The JS solution may seem to require less code, but it's more complex, since it invokes more parts of the stack in order to work. We could protect any token by providing two server-side functions for masking and unmasking. All I'm saying is we should not lean on JavaScript when a workable solution can be done entirely server-side.
```diff
+      def initialize(session, logger = nil)
+        session[:_csrf_token] ||= SecureRandom.base64(LENGTH)
+        @master_csrf_token = Base64.strict_decode64(session[:_csrf_token])
+        @logger = logger
```
It would be friendly to make `logger` an attr_accessor so other code/tests can get/set it post-initialize.
Unless it's being used outside this class right now, I don't think that is needed, personally :)
---
One thing to note regarding bootstrapping: only three characters are necessary to get it going. Consider

```html
<meta content="csrf_token_which_is_base64_encoded=" name="csrf-token" />
```

Note that if an attacker's query param is reflected in an attribute value anywhere in the page, as in

```html
<input type="hidden" name="who_cares_but_not_the_token" value="asdfHACKERGUESS" />
```

then the attacker has the reflection they need to line guesses up against the token.

Also, I emphasize b64 encoding here, since that gives the attacker some idea of how the secret will end, and allows them to use the same idea to bootstrap from the end instead of the beginning.
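The measurement being described can be sketched with plain zlib (a toy model, not an actual exploit; the token and guesses are made up): when the reflected guess matches part of the secret, DEFLATE finds a back-reference and the body compresses smaller.

```ruby
require "zlib"

# Toy compression oracle: a secret token and an attacker-reflected
# guess share one compression context, so a guess that matches the
# secret shrinks the compressed body. Values here are invented.
SECRET = "csrf=9fQ2hTk81LmZpW4c"

def compressed_size(reflected_guess)
  body = "<html><meta name=\"csrf-token\" content=\"#{SECRET}\">" \
         "<input value=\"#{reflected_guess}\"></html>"
  Zlib::Deflate.deflate(body).bytesize
end

good = compressed_size("csrf=9fQ2hTk8") # matches a prefix of the secret
bad  = compressed_size("mJ3dRvQzXeYt7") # same length, no overlap
good < bad # => true; this size difference is what the attacker measures
```

Repeating this per candidate character is the bootstrapping loop: the attacker extends whichever prefix keeps the response smallest.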
```diff
+        if @logger
+          @logger.warn "The client is using an unmasked CSRF token. This " +
+            "should only happen immediately after you upgrade to masked " +
+            "tokens; if this persists, something is wrong."
```
This doesn't actually matter: if an attacker has the raw token, they can generate a valid masked token. I think we can safely ignore it rather than log.
Yes, such a notification would be very rare: only for those who opened the page just before deployment.
---
The discussion above is pretty long, but @NZKoz is right: without reflection it doesn't work! You need it to place guesses.
---
One more comment: when we were doing our research, we definitely took a look at Rails to see how easy/difficult the attack would be there. It didn't make life easy for us (for essentially the reasons discussed above), and wouldn't have been a good candidate for a demo. That being said, as I think everyone here realizes, Rails is vulnerable in principle. Anyway, the fact that @angeloprado, @ygluck, and I didn't find a good way to attack Rails shouldn't provide too much comfort. There are lots of people out there who are way more industrious and clever than we are.
```diff
+            "tokens; if this persists, something is wrong."
+        end
```

```diff
+        masked_token == @master_csrf_token
```
Is this vulnerable to a timing attack?
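For reference, the usual fix for this concern is a constant-time comparison that inspects every byte instead of returning at the first mismatch. A generic sketch (not the PR's code; Rack ships an equivalent helper as `Rack::Utils.secure_compare`):

```ruby
# Generic constant-time comparison sketch: accumulate XOR differences
# across every byte, so running time does not depend on where the
# strings first differ.
def secure_compare(a, b)
  return false unless a.bytesize == b.bytesize
  diff = 0
  a.bytes.zip(b.bytes) { |x, y| diff |= x ^ y }
  diff.zero?
end

secure_compare("abc", "abc") # => true
secure_compare("abc", "abd") # => false
```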
---
Hey @bradleybuda, have you had time to react to the feedback from @NZKoz and @tarcieri? Seems like this is something important to get fixed...
essentially copied from rails#11729.
---
For people waiting for this to get merged, consider using https://github.com/meldium/breach-mitigation-rails in the meantime.
---
See #16570 for the simple masking merged from breach-mitigation-rails. This PR nicely encapsulates the authenticity token; it would be welcome to rebase and continue with its abstraction.
---
Closing due to inactivity, and since an alternative solution was already merged.
The BREACH attack described at Black Hat this year allows an attacker to recover plaintext from SSL sessions if they have some idea what they're looking for. One high-value thing to steal with a predictable plaintext format is the CSRF token (because it always appears in a meta tag, and frequently in form tags as well).

The researchers who discovered the attack suggest mitigating it by "masking" secret tokens so they are different on each request. This implements their suggested masking approach from section 3.4 of the paper (PDF). The authenticity token is delivered as a 64-byte string instead of a 32-byte string. The first 32 bytes are a one-time pad, and the second 32 are an XOR between the pad and the "real" CSRF token. The point is not to hide the token from the client, but to make sure it is different on every request, so it's impossible for an attacker to recover it by measuring compressibility.
The code should be backwards-compatible with existing Rails installs; the format of `session[:_csrf_token]` is unchanged, and unmasked tokens will still be accepted from clients (with a warning) so that you don't invalidate all your users' sessions on deploy. However, if users have overridden `ActionController#verified_request?`, this may break them (depending on whether or not they're calling `super`).

This is not a blanket fix for BREACH, just a way of protecting against one particular variant of the attack. I am not a security expert; I've just implemented the fix as suggested in the paper. This should be reviewed by someone who knows what they're doing.
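The backwards-compatible acceptance described above can be sketched as follows (helper names and the dispatch-on-length shape are assumptions, not the PR's actual code): a 64-byte token is unmasked first, while a legacy 32-byte token is compared directly.

```ruby
require "securerandom"

# Assumed helpers, not the PR's code: XOR and unmasking per the format
# described above (first half = pad, second half = pad XOR real token).
def xor_bytes(a, b)
  a.bytes.zip(b.bytes).map { |x, y| x ^ y }.pack("C*")
end

def unmask(wire)
  half = wire.bytesize / 2
  xor_bytes(wire[0, half], wire[half, half])
end

# Accept either the new 64-byte masked format or the legacy 32-byte
# unmasked token, so existing sessions keep working across a deploy.
def valid_token?(session_token, presented)
  case presented.bytesize
  when 64 then unmask(presented) == session_token
  when 32 then presented == session_token
  else false
  end
end

real   = SecureRandom.random_bytes(32)
pad    = SecureRandom.random_bytes(32)
masked = pad + xor_bytes(pad, real)

valid_token?(real, masked) # => true  (new masked format)
valid_token?(real, real)   # => true  (legacy unmasked token)
```

In real code the equality checks should also be constant-time, per the timing-attack comment in the review above.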