Encryption severely broken for large databases #21

Open · rokclimb15 opened this issue Aug 18, 2016 · 26 comments · 8 participants
rokclimb15 commented Aug 18, 2016

Unfortunately there are a couple of serious problems with openssl smime that render the encryption option useless and dangerous when dumping a database over a few GB.

Once the input grows large enough, openssl smime always produces a 1.9GB file on disk, which indicates that the input is truncated. This happens silently, which is very dangerous for a DB backup. Additionally, OpenSSL cannot decrypt any smime message that large due to internal limitations.

See https://rt.openssl.org/Ticket/Display.html?id=4651 for details (log in with guest/guest).

I recommend implementing gpg/gpg2 encryption with a passphrase file. With a new enough libgcrypt, gpg2 supports AES-NI for hardware-accelerated AES. I can work on a patch if desired, but wanted to file this immediately so that users with large DB exports can stop using encryption. It is very unsafe for backups, as they cannot be restored.

@cytopia cytopia added the bug label Aug 18, 2016

@cytopia cytopia self-assigned this Aug 18, 2016

cytopia (Owner) commented Aug 18, 2016

@rokclimb15 Thanks a lot for pointing this out.
I made it visible at the top of the Readme, so that everybody is aware of this.

Do you know the exact database size at which this problem occurs?

rokclimb15 commented Aug 18, 2016

The problem occurs between 1500 and 1600 MB of input. That won't translate exactly to a database size, but anything over 1.5GB of data (minus indexes) is a candidate for this problem. Files that large can be encrypted but not decrypted.

The input clipping seems to happen at 1.9GB, but due to the previous limitation, that issue is moot for now.

cytopia added a commit that referenced this issue Aug 18, 2016

cytopia (Owner) commented Aug 18, 2016

I made a temporary adjustment that issues a warning to stderr for databases larger than 1200 MB when encryption is enabled.
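Such a guard might look roughly like this (a sketch only; the function and variable names are hypothetical, not the actual mysqldump-secure code):

```shell
# Hypothetical sketch of a pre-encryption size guard; names are made up
# and the real mysqldump-secure implementation may differ.
WARN_LIMIT_MB=1200

warn_if_too_large() {
    # $1: dump size in megabytes
    if [ "$1" -gt "${WARN_LIMIT_MB}" ]; then
        echo "Warning: database size $1 MB exceeds ${WARN_LIMIT_MB} MB;" \
             "verify that the encrypted backup can be decrypted." >&2
    fi
}

warn_if_too_large 1500   # warns on stderr
warn_if_too_large 800    # stays silent
```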

cytopia (Owner) commented Aug 18, 2016

The updated version has also been pushed to Homebrew and the webpage, and announced on Twitter.

cytopia added a commit that referenced this issue Aug 18, 2016

jasperjorna commented Aug 18, 2016

Thanks for the heads up, @rokclimb15! Much appreciated.

cytopia (Owner) commented Aug 18, 2016

@rokclimb15 it seems that a more recent OpenSSL version with -stream (as pointed out by Dr Stephen N. Henson in the OpenSSL ticket) works as expected.

# Old OpenSSL Version
$ openssl version
OpenSSL 0.9.8zh 14 Jan 2016

$ openssl \
smime -encrypt -binary -text -aes256 -in sample.txt \
-out sample.txt.enc  -outform DER /etc/mysqldump-secure.pub.pem

# More recent OpenSSL Version
$ /usr/local/Cellar/openssl101/1.0.1t_1/bin/openssl version
OpenSSL 1.0.1t  3 May 2016

$ /usr/local/Cellar/openssl101/1.0.1t_1/bin/openssl \
smime -encrypt -stream -binary -text -aes256 -in sample.txt \
-out sample.txt.stream.enc  -outform DER /etc/mysqldump-secure.pub.pem

# with and without stream
$ ls -laph |grep sample
-rw-r--r--    1 cytopia 1286676289 3.0G Aug 18 23:43 sample.txt
-rw-r--r--    1 cytopia 1286676289 1.9G Aug 18 23:44 sample.txt.enc
-rw-r--r--    1 cytopia 1286676289 3.1G Aug 18 23:50 sample.txt.stream.enc

Can you verify this on your end?

rokclimb15 commented Aug 18, 2016

That does appear to fix the encryption problem. But now try to decrypt it ;)

It's unintentional ransomware if you ever lose your data and need to restore a backup. The data size warning should probably remain until streaming decryption or larger memory buffers are introduced in OpenSSL.
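Until then, one cheap safeguard is to test-decrypt every backup right after it is written, so a truncated or undecryptable file is caught immediately. A sketch, assuming an S/MIME setup like the one above; the backup and private-key paths are placeholders:

```shell
# Sketch only: verify that an S/MIME-encrypted backup is actually
# restorable before relying on it. Both paths are placeholders.
BACKUP=/var/mysqldump-secure/mydb.sql.gz.enc
KEY=/etc/mysqldump-secure.priv.pem

if openssl smime -decrypt -binary -inform DER \
        -in "${BACKUP}" -inkey "${KEY}" -out /dev/null 2>/dev/null; then
    echo "OK: ${BACKUP} can be decrypted"
else
    echo "FATAL: ${BACKUP} cannot be decrypted -- do not rely on it" >&2
fi
```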


cytopia (Owner) commented Aug 18, 2016

I see, decryption throws this error:

Error reading S/MIME message
140735184199760:error:07069041:memory buffer routines:BUF_MEM_grow_clean:malloc failure:buffer.c:150:
140735184199760:error:0D06B041:asn1 encoding routines:ASN1_D2I_READ_BIO:malloc failure:a_d2i_fp.c:239:

Too bad.

rokclimb15 commented Aug 19, 2016

Here's what I came up with for workable encryption. The command should be the same or similar for gpg. I used v2 because it uses libgcrypt, which supports AES-NI when new enough. The passphrase file should be chmod 600 for security and contain the passphrase used for encryption and decryption. gpg2 reads from STDIN and writes to STDOUT by default.

gpg2 --compress-algo none --cipher-algo AES256 --symmetric --batch --passphrase-file /root/mysql-backup-passphrase.txt
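For completeness, a round trip with that command might look like the sketch below. The file names are placeholders, the binary may be installed as plain gpg on some systems, and GnuPG >= 2.1 additionally needs --pinentry-mode loopback for --passphrase-file to be honoured in batch mode:

```shell
# Sketch of a gpg2 symmetric round trip; file names are placeholders.
PASSFILE=$(mktemp); echo 'change-me' > "$PASSFILE"; chmod 600 "$PASSFILE"
GNUPGHOME=$(mktemp -d); chmod 700 "$GNUPGHOME"; export GNUPGHOME
echo 'dump data' > mydb.sql

# Encrypt: reads STDIN, writes STDOUT
gpg2 --compress-algo none --cipher-algo AES256 --symmetric --batch \
     --pinentry-mode loopback --passphrase-file "$PASSFILE" \
     < mydb.sql > mydb.sql.gpg

# Decrypt with the same passphrase file
gpg2 --decrypt --batch --quiet \
     --pinentry-mode loopback --passphrase-file "$PASSFILE" \
     < mydb.sql.gpg > mydb.sql.dec

cmp mydb.sql mydb.sql.dec && echo 'round trip OK'
```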

cytopia (Owner) commented Aug 20, 2016

@rokclimb15 Thanks for this.
I still have to work out how exactly to implement the encryption (symmetric or asymmetric).

rokclimb15 commented Aug 21, 2016

My two cents is to use symmetric. This is a backup tool, so in general the user will maintain control of the files throughout their lifecycle; public/private key crypto probably isn't needed for that reason. Additionally, a careless user might not back up their private key, whereas even if the whole system were lost they could very well still remember their encryption passphrase. Most backup products use a passphrase to encrypt and decrypt.

brownbrady commented Nov 7, 2016

Thank you very much for your work on this script; it is very useful to me. I am waiting for your solution to the large-file problem, as 25% of my databases cannot be decrypted because they are very large. Thank you again.

rokclimb15 commented Nov 7, 2016

@brownbrady - just add a new encryption option like this. You'll need to install gpg2 and create a passphrase file. This works on very large files.

gpg2 --compress-algo none --cipher-algo AES256 --symmetric --batch --passphrase-file /root/mysql-backup-passphrase.txt

brownbrady commented Nov 7, 2016

@rokclimb15: Thank you for your suggestion. By "encryption option", do you mean it is a variable in the mysqldump-secure.conf file? If so, what is the name of the variable?

rokclimb15 commented Nov 7, 2016

No, currently you would have to apply these steps to the unencrypted backup after it is produced. I'm glad to work up a patch for mysqldump-secure if cytopia agrees it's the direction he'd like to take.


brownbrady commented Nov 7, 2016

@rokclimb15: I understand now. I will set ENCRYPT=0, then install gpg2 and create a passphrase file. Then I will need to run this after mysqldump-secure completes:
gpg2 --compress-algo none --cipher-algo AES256 --symmetric --batch --passphrase-file /root/mysql-backup-passphrase.txt
Thank you for your replies, and I look forward to a patch if possible.

cytopia (Owner) commented Nov 7, 2016

@brownbrady how large are those databases after compression?
Encryption is always done after compression, so it is the compressed size which matters, not the actual database size.

brownbrady commented Nov 8, 2016

@cytopia: I just checked. Before compression, it was 6306.14 MB. After compression it was 954 MB according to ls -lh command. Here was the warning:

[WARN] (SQL): 32/528 Warning: Encryption is enabled and database size is > 1200 MB
[WARN] (SQL): 32/528 Warning: Verify that your backup can be decrypted.
[WARN] (SQL): 32/528 Warning: This warning can be disabled via 'ENABLE_SMIME_BUG_WARNING=0'
[WARN] (SQL): 32/528 Warning: Read here: https://github.com/cytopia/mysqldump-secure/issues/21
[INFO] (SQL): 32/528 Dumping: mydb (6306.14 MB) 318 sec (953.56 MB)

Does this mean the 'mydb' database above can be decrypted?

cytopia (Owner) commented Nov 8, 2016

Yes, it can be decrypted, as long as the final file size does not exceed 1.9GB.

Looks like I've implemented the warning on the wrong size: it checks the initial database size instead of the size that will end up on disk (with or without compression/encryption).

If you come close to 1.9 GB, you can choose a stronger compression algorithm.
If you still come close to 1.9 GB, please encrypt manually for now, as suggested above.
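Since it is the compressed, on-disk size that must stay under the limit, it can be measured up front. A sketch (mydb and the piped mysqldump line are illustrative, and a zero-filled file stands in for a real dump):

```shell
# Sketch: measure the compressed size before trusting encryption.
# In practice you would pipe the real dump, e.g.:
#   mysqldump mydb | gzip -9 | wc -c
# Here a zero-filled sample file stands in for a dump.
head -c 1000000 /dev/zero > sample.sql

gzip -1 -c sample.sql | wc -c   # fastest gzip level
gzip -9 -c sample.sql | wc -c   # strongest gzip level
# xz or bzip2 would typically compress harder still, if installed:
#   xz -6 -c sample.sql | wc -c
```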

brownbrady commented Nov 16, 2016

@cytopia: I checked my databases and they are all under 1.9 GB. I will proceed with the encryption. Thank you for your script.

@cytopia cytopia added this to the v0.17 milestone Nov 16, 2016

cytopia (Owner) commented May 26, 2017

Discussed on GitHub in openssl/openssl:

globz commented Sep 1, 2017

Hi,

Will this issue ever be resolved, or is the only solution to encrypt manually?

Bobspadger commented Oct 20, 2017

I'd like to know if it would be possible to use GPG encryption via a configuration option?

It's a great little utility, which is unfortunately being hamstrung by openssl :(

Keep up the good work :D

Red-M commented Mar 27, 2018

I've put a patch through to correct this behavior.

globz commented Mar 27, 2018

Thank you very much, Red-M. I am slowly approaching the dreaded limit and didn't want to look for an alternative; hopefully this patch will save us all!

imreFitos commented Apr 2, 2018

For future visitors: I wrote a program that can decrypt large openssl smime encrypted files, it's at https://github.com/imreFitos/large_smime_decrypt
