Issue while backing up ~1TB volume #11
The utility works in my context, and probably for other people as well from what I am hearing. Can you provide more details?
Can you also try to run it on a smaller test volume first to see if it works as expected? I don't know about Docker volume size limitations - feel free to ask the Docker folks or do some tests.
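One way to act on this suggestion is to checksum a small set of randomly generated files before the backup and compare again after the restore. A minimal sketch, with hypothetical paths (the actual `volume-backup` invocation would happen between the two checksum passes):

```shell
# Generate a small random test dataset and record checksums so a
# backup/restore round trip can be verified end to end.
mkdir -p /tmp/volume-test
for i in 1 2 3; do
    dd if=/dev/urandom of="/tmp/volume-test/file$i.bin" bs=1M count=10 2>/dev/null
done
(cd /tmp/volume-test && sha256sum file*.bin > /tmp/checksums.before)

# ... run the backup, wipe the volume, run the restore here ...

# Re-run the checksums after the restore; an empty diff means the
# data survived the round trip intact.
(cd /tmp/volume-test && sha256sum file*.bin > /tmp/checksums.after)
diff /tmp/checksums.before /tmp/checksums.after && echo "data intact"
```

If the diff is non-empty or either command errors out, the restored data does not match the original, which is exactly the failure mode reported here.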
@loomchild thanks for your reply. Please find the details below.
Hi @rakeshnambiar, thanks for the detailed answers. I have a couple more questions / suggestions:
I was able to successfully back up and restore a 13GB volume with randomly generated data (sorry, I don't have a larger disk at hand to try a bigger one). I tested both using files and stdin/stdout. Please note that the output archive is compressed, so it will be smaller than the original volume (although a 0.5% compression ratio seems a bit optimistic). Also make sure that you use the latest version of the utility, as I recently improved the error handling mechanism. For the moment, one hypothesis is that there's some issue with redirection to stdout on your machine. Can you decompress the archive manually using tar, or is it damaged?
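Checking the archive manually can be sketched as follows, assuming the default bzip2-compressed tarball (the archive name is illustrative; a throwaway archive is built here so the commands are self-contained - substitute your real backup file):

```shell
# Build a tiny .tar.bz2 stand-in for the backup archive.
mkdir -p /tmp/demo && echo "hello" > /tmp/demo/sample.txt
tar -cjf /tmp/backup.tar.bz2 -C /tmp demo

# 1. Test the bzip2 stream itself for corruption.
bzip2 -t /tmp/backup.tar.bz2 && echo "bzip2 stream OK"

# 2. List the tar contents without extracting; errors here indicate a
#    truncated or damaged archive.
tar -tjf /tmp/backup.tar.bz2 > /dev/null && echo "tar structure OK"
```

If either step fails on the real 1TB archive, the problem happened during the backup/redirection step rather than during restore.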
@loomchild I am sure that Docker is currently configured with the AUFS driver; however, I would like to change it to […]. May I ask your storage driver name, please?
I think you are confusing the Docker storage driver with the volume driver. The former, such as AUFS or Device Mapper, is responsible for managing the root container filesystem, whereas the latter controls the mounted volumes. AUFS stores all previous versions of the files, so it would be very inefficient for storing frequently changing data. I am also using AUFS as the storage driver. Could you execute […]?
@loomchild details are below. Please note that I haven't reloaded the ~800GB of data, since it was lost during the last restore attempt, but I can probably load ~100GB and retry.
@rakeshnambiar my vote is with @loomchild ... there could be something goofy going on with stdin/out ... did you manage to manually decompress your archive ok?
Yeah, so it's a local driver for your volume, not AUFS. BTW, could you confirm that minio_data is mounted under /data? Please also retry and let me know; make sure the data is in the /data directory. Sorry about the lost data - I hope it wasn't critical and wasn't the only copy. As a general best practice, I suggest regularly testing your restore procedure - just having a backup is never sufficient.
@rakeshnambiar I have another remark. I see that your main filesystem has a size of 1TB, and it's the same one that is mounted at /data. Are you storing both the containers and the volume on the same disk/filesystem? Perhaps you simply ran out of disk space while doing the backup (I don't see sufficient space to store an extra 1TB, judging by your […] output).
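A rough pre-flight check for this hypothesis is to compare the volume's on-disk size with the free space on the filesystem the archive is written to. A sketch with placeholder paths (substitute the volume's actual mountpoint and your backup target; a small demo directory is created here so the commands run anywhere):

```shell
# Stand-in for the volume's data directory.
mkdir -p /tmp/space-demo && dd if=/dev/zero of=/tmp/space-demo/blob bs=1M count=5 2>/dev/null
VOLUME_DIR=/tmp/space-demo   # placeholder for the volume's data directory
TARGET_DIR=/tmp              # placeholder for where the archive will land

used_kb=$(du -sk "$VOLUME_DIR" | awk '{print $1}')
free_kb=$(df -kP "$TARGET_DIR" | awk 'NR==2 {print $4}')
echo "volume uses ${used_kb} KB, target has ${free_kb} KB free"

# Compression helps, but warn when free space is below the raw volume size.
if [ "$free_kb" -lt "$used_kb" ]; then
    echo "WARNING: target may not have enough free space for this backup"
fi
```

Comparing against the raw (uncompressed) size is deliberately conservative, since the achievable compression ratio on the real data is unknown.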
@loomchild it was the case in the beginning, but later on I added 3TB of overall space, mounted and mapped to a single virtual folder as stated below. Also, the /data folder is correct and looks like below:
Sorry, I don't see your 3TB in the results of […]. Let me know if you can still reproduce the issue, otherwise I will close the ticket.
BTW, are you sure that […]?
We are using docker-compose. The volumes are unmounted now, as I faced this issue 3-4 weeks back, so you cannot see the […].
OK, so I am closing the issue. Feel free to re-open or create a new one if you are able to provide more details and reproduce this particular issue.
@loomchild
I tried to back up a Docker persistent volume with the loomchild/volume-backup utility, but it seems it did not work. I had 800GB of data, yet both backup and restore were suspiciously quick (~4 mins). When I deleted the actual data from the hard disk, I realized the utility hadn't worked as expected. It works fine with small volumes. Is this due to the default block size limit, which is 10GB? Does a Docker persistent volume automatically create a new volume when its size exceeds ~10GB?