
Decryption result is not checked before running decompression #101

Closed · obohrer opened this issue Sep 11, 2017 · 5 comments

obohrer commented Sep 11, 2017

Hi,

Thank you for creating nippy. It is a great library which works quite well for us, apart from a recent issue we encountered:

We are experiencing an issue with nippy when trying to thaw a byte-array which has been encrypted with a different encryption key.

The 5th byte of the decrypted payload contains the len-decomp (decompressed length), but if the wrong encryption key was used this length can be huge, resulting in a java.lang.OutOfMemoryError: Java heap space. We expect nippy to throw an exception, not blow up the JVM.

Do you think a new version of nippy could check the decryption result before trying to decompress it? (For example, by verifying a magic series of bytes after the NPY header?)
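
For reference, a minimal sketch of the failure mode described above (the data and passwords are purely illustrative; at the time, the default encryptor used with a :password was AES-CBC):

(require '[taoensso.nippy :as nippy]
         '[taoensso.nippy.compression :as compression])

;; Freeze some data with LZ4 compression and password A:
(def frozen
  (nippy/freeze {:some "data"}
    {:compressor compression/lz4-compressor
     :password   [:cached "password-A"]}))

;; Thaw with a *different* password. Rather than failing cleanly, the junk
;; plaintext's decompressed-length prefix can be huge, so the decompressor
;; may attempt an enormous allocation (java.lang.OutOfMemoryError):
(nippy/thaw frozen {:password [:cached "password-B"]})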

ptaoussanis (Member)

Hi Olivier,

What compressor are you using: Snappy or LZ4? The reason I'm asking is that this was a known issue with Snappy, and it was one of the motivations for switching from Snappy to LZ4 as the default Nippy compressor.

obohrer (Author) commented Sep 11, 2017

Hi Peter, yeah we are using LZ4, configured like this for encryption/compression:

(nippy/freeze data {:compressor taoensso.nippy.compression/lz4-compressor
                    :password   [:cached password]})

So it seems the same issue exists with LZ4. I think this might not be resolved just by changing the compressor, but by checking that the decryption was successful.

obohrer (Author) commented Oct 16, 2017

Any news on this issue?
How easy do you think such a change would be?

If I understand correctly, we could add a new encryptor-id like :aes128-sha512-withcheck or something similar, which would encrypt a magic byte/bytes alongside the actual payload and use it to verify the decryption result?
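
Roughly the idea, as a standalone sketch (the marker value and helper names are made up for illustration; they're not Nippy internals): the freeze side would prepend known check bytes to the plaintext before encrypting, and the thaw side would verify them after decrypting, before handing anything to the decompressor.

(def ^bytes check-bytes (byte-array [0x4E 0x50 0x59])) ; arbitrary marker

(defn add-check-bytes
  "Prepends the marker to the plaintext (to be called before encryption)."
  ^bytes [^bytes payload]
  (let [n   (alength check-bytes)
        out (byte-array (+ n (alength payload)))]
    (System/arraycopy check-bytes 0 out 0 n)
    (System/arraycopy payload     0 out n (alength payload))
    out))

(defn verify-check-bytes
  "Called after decryption: throws if the marker is missing (likely wrong
  password), otherwise returns the remaining payload for decompression."
  ^bytes [^bytes decrypted]
  (let [n (alength check-bytes)]
    (when (or (< (alength decrypted) n)
              (not (java.util.Arrays/equals check-bytes
                     (java.util.Arrays/copyOfRange decrypted 0 n))))
      (throw (ex-info "Decryption check failed (wrong password?)" {})))
    (java.util.Arrays/copyOfRange decrypted n (alength decrypted))))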

ptaoussanis (Member)

Hi Olivier,

Thanks a lot for bringing this to my attention; this is something that should be fixed. Haven't had an opportunity to think about it yet. Some kind of hash/checksum would certainly be one option.

Haven't had much time for open source recently, but will make this my next priority when I have some free time 👍.
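
For context, the hash/checksum option would amount to something like encrypt-then-MAC. A minimal sketch using plain javax.crypto (illustrative only, not what Nippy ships): append an HMAC of the ciphertext on freeze, and verify it on thaw before decrypting/decompressing.

(import '(javax.crypto Mac)
        '(javax.crypto.spec SecretKeySpec))

(defn hmac-sha256 ^bytes [^bytes mac-key ^bytes data]
  (let [mac (Mac/getInstance "HmacSHA256")]
    (.init mac (SecretKeySpec. mac-key "HmacSHA256"))
    (.doFinal mac data)))

;; On thaw: recompute the HMAC over the received ciphertext and compare it
;; (in constant time) to the stored one *before* touching the ciphertext:
(defn mac-ok? [^bytes mac-key ^bytes ciphertext ^bytes stored-mac]
  (java.security.MessageDigest/isEqual (hmac-sha256 mac-key ciphertext) stored-mac))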

@ptaoussanis ptaoussanis self-assigned this Dec 17, 2017
ptaoussanis added a commit that referenced this issue Jan 6, 2019
Why?

  - AES-GCM is faster and can be more secure, Ref. https://goo.gl/Dsc9mL, etc.
  - AES-GCM is an authenticated[1] encryption mechanism, providing
    automatic integrity checks. This is relevant to [#101].

What's the issue with #101?

  - We    compress then encrypt    on freeze; the reverse would make compression useless
  - So we decrypt  then decompress on thaw

Attempting CBC decryption with the wrong password will often, but not
*always*, throw. Meaning it's possible for decompression to be attempted
on a junk ba (byte array). And this can cause some decompressors to fail
in a destructive way, including large allocations (DoS) or even taking
down the JVM in extreme cases.

Possible solutions?

  - We could add our own HMAC, etc.
  - And/or we could use something like AES-GCM which offers built-in
    integrity and will throw an AEADBadTagException on failure.

There may indeed be reasons [2,3,4] to consider adding a custom HMAC -
and that's still on the cards for later.

But in the meantime, the overall balance of pros/cons seems to lean
in the direction of choosing AES-GCM as a reasonable default.

Note that the change in this commit is done in a backward-compatible
way using Nippy's versioned header: new payloads will be written using
AES-GCM by default. But old payloads already written using AES-CBC will
continue to be read using that scheme.

References
  [1] https://en.wikipedia.org/wiki/Authenticated_encryption
  [2] https://www.daemonology.net/blog/2009-06-24-encrypt-then-mac.html
  [3] https://blog.cryptographyengineering.com/2011/12/04/matt-green-smackdown-watch-are-aead/
  [4] HMAC vs AEAD integrity,           https://crypto.stackexchange.com/q/24379
  [5] AES-GCM vs HMAC-SHA256 integrity, https://crypto.stackexchange.com/q/30627
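
To make the AEAD point above concrete, here's a standalone javax.crypto sketch (key sizes, nonce handling etc. are simplified for illustration; this is not Nippy's actual implementation). Decrypting AES-GCM data with the wrong key reliably throws AEADBadTagException, instead of possibly handing junk plaintext to the decompressor the way CBC can:

(import '(javax.crypto Cipher AEADBadTagException)
        '(javax.crypto.spec SecretKeySpec GCMParameterSpec)
        '(java.security SecureRandom))

(defn rand-bytes ^bytes [n]
  (let [ba (byte-array n)] (.nextBytes (SecureRandom.) ba) ba))

(let [iv         (rand-bytes 12)                        ; 96-bit GCM nonce
      key-a      (SecretKeySpec. (rand-bytes 16) "AES") ; "right" key
      key-b      (SecretKeySpec. (rand-bytes 16) "AES") ; "wrong" key
      params     (GCMParameterSpec. 128 iv)             ; 128-bit auth tag
      ciphertext (-> (doto (Cipher/getInstance "AES/GCM/NoPadding")
                       (.init Cipher/ENCRYPT_MODE key-a params))
                     (.doFinal (.getBytes "compressed payload" "UTF-8")))]
  (try
    (-> (doto (Cipher/getInstance "AES/GCM/NoPadding")
          (.init Cipher/DECRYPT_MODE key-b params))
        (.doFinal ciphertext))                          ; tag check fails here
    (catch AEADBadTagException _ :decryption-failed-cleanly)))
;; => :decryption-failed-cleanly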
ptaoussanis (Member)

Hi @obohrer, I've just pushed [com.taoensso/nippy "2.15.0-alpha9"] to Clojars which switches Nippy's default encryption scheme from AES-CBC to AES-GCM.

As you'll see in the relevant commit, this should help address the problem you're seeing.

I'll note: the integrity/tag check will only apply to newly written/frozen data.

Hope that helps?
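
In practice that means re-running the original scenario against data frozen with the new default should now fail at decryption (the GCM tag check), before any decompression is attempted. A rough sketch, assuming the 2.15.0-alpha9 defaults:

(require '[taoensso.nippy :as nippy])

(def frozen ; written with the new AES-GCM default encryptor
  (nippy/freeze {:some "data"} {:password [:cached "password-A"]}))

;; A wrong password now surfaces as a thaw-time exception (ultimately an
;; AEAD tag failure) rather than a huge allocation; the exact exception
;; type/message is Nippy's wrapping and may vary by version:
(try
  (nippy/thaw frozen {:password [:cached "password-B"]})
  (catch Exception _ :thaw-failed))
;; => :thaw-failed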
