DecryptAndDecompressTar: lz4 decompress failed. Is archive encrypted?: DecompressLz4: lz4 write failed: unexpected EOF #849
Hi!
Hi! Hmm, it's strange. I tried manually uploading and downloading archives (like part_001.tar.lz4 or similar) to and from the S3 storage, and everything works perfectly. The connection is stable and the speed is good enough, but restoring from a backup fails with error messages:
Also, I tried different S3 storages (different servers), but the restore still fails almost every time. Maybe something is wrong here:
Right now I'm using wal-e. I'd like to switch to wal-g, but these issues are stopping me. How can I solve this problem? What would you recommend?
Decompression occurs on the fly, right after receiving bytes from the network. No extra libs are necessary.
I get the same error. I'm using Backblaze's S3 API, and the first two parts always fail to decompress.
This might be a duplicate of #449, which contains an interesting finding: #449 (comment). I also have the same problem running:
I'm using a DigitalOcean droplet with an attached volume (https://docs.digitalocean.com/products/volumes/). On the Volumes product page they mention that they are using Ceph. Could Ceph be the source of this problem? Disk write speed differs, almost a 2x difference for me:
I tried it. Maybe tcpdump will be helpful: when the restore fails, it looks like the connection is closed by the Object Storage? Another interesting finding, on a normal disk (I should mention that part_001.tar.br is only 9.9 MB):
Ceph disk
I have a feeling it might be something about the stream that is passed all the way from the S3 client to the decompressor, and about how buffers work in Ceph and how decompressed files are written to disk. I wonder whether saving to a temporary file, instead of processing the stream right away, would change the situation?
A bit more information: if I generate some write load on the disk with
Then backup-fetch fails after a rather long wait for part_001.tar.br.
Without the write-heavy operation I can still restore to /mnt/volume_lon1_01, but even a couple of apps writing log files to that disk affect the process... If I do the same on another disk (not an attached block-storage device), then everything works:
But at the same time I can use https://github.com/s3tools/s3cmd and download the file without any issues. @x4m, I'm wondering whether that makes any difference?
Maybe under load the buffer size changes, and the number of bytes fetched is not enough to decompress the contents? What if we downloaded the contents to a temp file first (https://golang.org/pkg/io/ioutil/#example_TempFile) and then tried to decompress/extract?
I did a quick test and simply dumped the output of lines 102 to 103 in 0261341 to files.
The files are there and were created without any problems (no EOF in 10 tries).
Hi,
My DB server:
CentOS 7.7
PostgreSQL 12.5
I'm using WAL-G v0.2.19 (Latest). Backups and WALs go to S3 storage.
Interestingly, during the restore process I get some error messages:
Sometimes the restore is OK, but sometimes it takes very long and finally crashes.
I create backups the regular way with backup-push (no encryption or other special options).
Creating backups and uploading them to S3 storage always works and succeeds. Problems occur only during the restore process.
What does it mean "DecryptAndDecompressTar: lz4 decompress failed. Is archive encrypted?: DecompressLz4: lz4 write failed: unexpected EOF"?
Why does it happen?