
subtract buffer size from computed rekey limit to avoid exceeding it #19

Closed
wants to merge 1 commit

Conversation

aadamowski

Hi!

This pull request changes the way in which the rekey limit is computed based on cipher block size to address a problem with OpenSSH going over the intended limit.

But first, a short background story:

In 2013, Red Hat introduced a patch for OpenSSL that added some additional checks to its GCM implementation:

https://lists.fedoraproject.org/pipermail/scm-commits/Week-of-Mon-20131111/1144834.html

These checks are based on recommendations from NIST SP 800-38D:

http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf

Among those, section 5.2.1.1 imposes a limit on plaintext length that amounts to 64 GiB.

At Facebook, this was causing our scp transfers larger than 64 GiB to die with a "cipher_crypt: EVP_Cipher failed" error.

The check implementing this limit was recently rolled back by Red Hat:

https://rhn.redhat.com/errata/RHBA-2015-0772.html

The reason for dropping it is stated in the package's ChangeLog:

  • Thu Mar 26 2015 Tomáš Mráz tmraz@redhat.com 1.0.1e-30.8
  • drop the AES-GCM restriction of 2^32 operations because the IV is
    always 96 bits (32 bit fixed field + 64 bit invocation field)

According to our own analysis, the change does not remove an operation-count restriction (specified in Sec 8.3 of NIST SP 800-38D and dependent on the use of a non-recommended IV configuration), but rather the total plaintext length restriction (specified in Sec 5.2.1.1, which is unconditional).

Regardless of the validity of the removed check, it exposed what we believe to be a bug in the way OpenSSH handles rekey limits that are based on data volume rather than time.

Currently, if the rekey limit is not explicitly configured, it's computed algorithmically from the cipher's block size; the relevant logic in packet.c looks roughly like this:

if (enc->block_size >= 16)
        *max_blocks = (u_int64_t)1 << (enc->block_size * 2);

For a 128-bit block cipher like AES-GCM, this amounts to a limit of exactly 64 GiB - the same as the one recommended by NIST.

However, the check for exceeding the rekey limit (the max_blocks_* fields in the session state) is performed in clientloop and serverloop only after a buffered batch of packets has been processed. As a result, the amount of data encrypted/decrypted will almost always exceed the limit by a few blocks (depending on how many of them were in the buffer) before rekeying is triggered.

In our case, this was causing AES-GCM to go above the 64 GiB limit shortly before rekeying was triggered and abort with an error, unless a sufficiently lower RekeyLimit was explicitly set (which itself can only hold values below 4 GiB because a 32-bit unsigned integer is used, but that's a different story).

Our proposed fix is to subtract the maximum theoretical number of buffered blocks from the computed max_blocks value.

@aadamowski
Author

CC @sweis

@daztucker
Contributor

Fixed upstream via https://bugzilla.mindrot.org/show_bug.cgi?id=2521. Please reopen the bugzilla bug if there are further issues.

@daztucker daztucker closed this Aug 2, 2016