
Backup compression algorithm #72

Closed
szaouam opened this issue Mar 22, 2017 · 10 comments

Comments

szaouam commented Mar 22, 2017

Hello,

I am using the shield-boshrelease to backup the Cloud Foundry blobstore.

The release uses bzip2 to compress backups, which can take a long time for large amounts of data.

My question is: could a new feature make the compression algorithm configurable (the user could choose between plain tar, bzip2, ...), for example as a parameter in the bosh-release manifest?
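For illustration, a rough sketch of what such a knob could look like inside a backup script. The `compress_backup` helper and the algorithm names are hypothetical, not an existing shield-boshrelease property:

```shell
#!/bin/sh
set -e
# Hypothetical sketch only: shield-boshrelease exposes no such property today;
# the helper name and algorithm values are invented for illustration.
compress_backup() {
  src=$1; out=$2; algo=${3:-bzip2}
  case "$algo" in
    none)  tar -cf  "$out.tar"     -C "$(dirname "$src")" "$(basename "$src")" ;;
    gzip)  tar -czf "$out.tar.gz"  -C "$(dirname "$src")" "$(basename "$src")" ;;
    bzip2) tar -cjf "$out.tar.bz2" -C "$(dirname "$src")" "$(basename "$src")" ;;
    *)     echo "unknown compression: $algo" >&2; return 1 ;;
  esac
}

# Demo against throwaway data (a stand-in for /var/vcap/store/shared):
demo=$(mktemp -d)
mkdir -p "$demo/store" && echo "blob" > "$demo/store/object"
compress_backup "$demo/store" "$demo/backup" gzip
ls "$demo"
```

The same pattern would let `none` skip compression entirely when the payload (like a blobstore full of already-compressed objects) does not shrink anyway.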

Best regards,

jhunt (Contributor) commented May 3, 2017

I'd be curious to see benchmarks of bzip2 taking "a long time" at various archive sizes.
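A quick-and-dirty comparison along these lines would give a first data point; the 20 MB of repetitive sample data below is a placeholder, not a measurement from this issue:

```shell
#!/bin/sh
set -e
# Timing sketch: compare gzip vs bzip2 wall-clock time and output size on the
# same input. Sample data is synthetic, not the blobstore discussed above.
f=$(mktemp)
yes "some repetitive blobstore-ish data" | head -c 20000000 > "$f"

t0=$(date +%s); gzip  -c "$f" > "$f.gz";  echo "gzip:  $(( $(date +%s) - t0 ))s"
t0=$(date +%s); bzip2 -c "$f" > "$f.bz2"; echo "bzip2: $(( $(date +%s) - t0 ))s"

# Compare compressed sizes as well as times:
ls -l "$f" "$f.gz" "$f.bz2"
```

Repeating this at several input sizes (and with incompressible data, since blobstores often hold already-compressed objects) would show whether bzip2 is the bottleneck.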

szaouam (Author) commented May 5, 2017

Hello,

I used the fs plugin to back up the Cloud Foundry blobstore.

I backed up the /var/vcap/store/shared folder.

The size of the backed-up folder is 105 GB.

SHIELD took 7 hours to perform the backup.

[screenshot: capture]

The final size of the backup is 101.1 GB.
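As a sanity check on those figures, the implied end-to-end throughput of 105 GB in 7 hours works out to roughly 4.3 MB/s:

```shell
# Back-of-the-envelope: throughput implied by 105 GB moved in 7 hours.
awk 'BEGIN { printf "%.2f MB/s\n", 105 * 1024 / (7 * 3600) }'
# prints "4.27 MB/s"
```

That the archive barely shrank (105 GB down to 101.1 GB) also suggests the blobstore contents are largely incompressible, so the bzip2 CPU time buys almost nothing here.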

[screenshot: capture2]

Best regards

geofffranks (Contributor) commented May 5, 2017 via email

szaouam (Author) commented May 5, 2017

@geofffranks
I am using an internal S3 storage service.
I uploaded the same amount of data using an S3 client. It took about 1 hour.

jhunt (Contributor) commented May 12, 2017

Which S3 client?

jhunt (Contributor) commented May 12, 2017

Also, what is backing your s3 storage API? (assuming it's not on-prem AWS 😉)

szaouam (Author) commented May 12, 2017

@jhunt, I used s3cmd.

Right, I am not using on-prem AWS :). I am using an internal S3 storage service (20 MB/s download speed).
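For reference, the comparison upload would have been something along these lines; the bucket name and chunk size below are placeholders, not details from this thread, and the command is only printed here (as a dry run) so the sketch can run anywhere:

```shell
#!/bin/sh
# Placeholder sketch of an s3cmd bulk upload used for comparison; the bucket
# and chunk size are invented for illustration. Echoed instead of executed.
CMD="s3cmd put --recursive --multipart-chunk-size-mb=64 /var/vcap/store/shared s3://backup-bucket/"
echo "$CMD"
```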

jhunt (Contributor) commented May 12, 2017

Just to double-check: when you uploaded the same amount of data, were you on the same VM that was running the s3 plugin in the slow scenario?

szaouam (Author) commented May 12, 2017

@jhunt, yes.

jhunt (Contributor) commented Nov 22, 2017

Reopening against the SHIELD project proper.
