Issue while backing up ~1TB Volume #11

Closed · rakeshnambiar opened this issue Mar 5, 2019 · 15 comments

@rakeshnambiar commented Mar 5, 2019

@loomchild

I tried to back up a Docker persistent volume with the loomchild/volume-backup utility, but it seems it did not work. The volume held about 800GB of data, yet both backup and restore were very quick (~4 minutes). Only after I deleted the actual data from the hard disk did I realize the utility hadn't worked as expected.

It works fine with small volumes. Is this due to a default size limit of 10GB? Does a Docker persistent volume automatically create a new volume when its size exceeds ~10GB?

@loomchild (Owner)

The utility works in my setup and, from what I hear, for other people as well.

Can you provide more details:

  • each command you have executed,
  • names of volumes,
  • error / info messages,
  • contents of the volume (anything specific?),
  • contents of generated archives (what do they contain? since generation took ~4 min I suppose there's something),
  • contents of restored files (which part of the files was restored?),
  • whether the volume is stored locally or on a network drive via a specific Docker driver,
  • if the volume is stored locally, which filesystem / operating system you use,
  • etc.

Can you also try to run it on a smaller test volume first to see if it works as expected?

I don't know about Docker volume size limitations - feel free to ask about them or do some tests.

@rakeshnambiar (Author)

@loomchild thanks for your reply. Please find the details below.

  • each command you have executed
    I used the backup-to-standard-output option:
    docker run -v minio_data:/volume --rm loomchild/volume-backup backup - >minio_data_archive.tar.bz2

    Restore command
    cat minio_data_archive.tar.bz2 | docker run -i -v minio_data:/volume --rm loomchild/volume-backup restore -

  • names of volumes
    The volumes section of the docker-compose file looks like this:

      volumes:
        postgres-volume:
        minio_data: {}
        minio_config: {}
        ftpdata:
    
  • error / info messages
    No error or info messages were generated

  • contents of the volume (anything specific?)
    Images, PDF, DOC, DOCX and tar.gzip files

  • contents of generated archives (what do they contain, since generation took 4min I suppose there's something)
    Only ~6GB of the minio_data volume was restored, whereas it contained around ~800GB of data

  • contents of restored files (which part of the files was restored?)
    It seems to me that only ~10GB of data was backed up and the rest of the data was lost. As the forum thread below suggests, maybe there's a maximum volume size constraint of ~10GB:
    https://forums.docker.com/t/increase-container-volume-disk-size/1652/3
    I am using the AUFS storage driver

  • information whether the volume is stored locally or on a network drive via specific Docker driver
    It's a locally available mounted volume. In fact, three 1TB disks are mounted and merged into a single folder called virtual, as described below:
    https://romanrm.net/mhddfs

  • if volume is stored locally then what filesystem / operating system do you use
    ext4 and ext3

  • etc.
    Out of the 4 volumes, the issue occurs only for the minio_data volume, which means this utility works fine when the volume size is less than ~10GB.
    If the ~10GB volume size limit is real, I don't think the problem is with this utility, and I may need to look for a way to increase the limit to ~1TB using the Device Mapper storage driver

@loomchild (Owner)

Hi @rakeshnambiar, thanks for the detailed answers.

I have a couple more questions / suggestions:

  1. Could you try to back up without using stdin/stdout? Also, could you check whether the output archive actually contains the files, to determine whether the problem occurs during backup rather than during restore (see the command sketch below this list).
  2. I am not aware of this 10GB limitation, but yes, please ask about it. From the discussion thread you linked, my impression is that they are discussing container storage, not mounted volumes (despite the title).
  3. Are you sure you are using the AUFS driver for the volume? I thought it's only used for container data stored outside of volumes. I suppose you are using the default local volume driver - could you confirm?
  4. Interesting that several disks are merged into one volume - perhaps this is causing some problems? Could you try with a simpler setup and perhaps a smaller volume (100GB, for example)?
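
For point 1, if I recall the utility's file-based (non-stdout) usage correctly, it should be something like the following (the /mnt/virtual/backup directory is just an example location with enough free space):

    docker run -v minio_data:/volume -v /mnt/virtual/backup:/backup --rm loomchild/volume-backup backup minio_data_archive
    ls -lh /mnt/virtual/backup

The archive size reported by ls should roughly match the compressed size of your ~800GB of data; if it's only a few GB, the problem is on the backup side rather than the restore side.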

@loomchild (Owner)

I was able to successfully back up and restore a 13GB volume with randomly generated data (sorry, I don't have a larger disk at hand to try a bigger one). I tested both using files and using stdin/stdout.

Please note that the output archive is compressed, so it will be smaller than the original volume (although a 0.5% compression ratio seems a bit optimistic).

Also make sure that you use the latest version of the utility, as I recently improved the error handling mechanism.

For the moment, one hypothesis is that there's some issue with the redirection to stdout on your machine. Could you decompress the archive manually using tar, or is it damaged?
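
To check the archive manually (the file name is taken from your earlier command), something along these lines should work:

    bzip2 -tv minio_data_archive.tar.bz2
    tar -tjf minio_data_archive.tar.bz2 | wc -l

If either command fails, the archive is damaged; if both succeed but the file count is much lower than the number of files in the volume, the data never made it into the archive in the first place.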

@rakeshnambiar (Author)

@loomchild I am sure that Docker is currently configured with the AUFS driver; however, I would like to change it to the Device Mapper storage driver and give it a try. Also, I can confirm that I am using the latest version of the utility.

May I ask which storage driver you are using?

rakeshnambiar changed the title from "Issue while backingup ~TB Volume" to "Issue while backingup ~1TB Volume" on Mar 10, 2019
@loomchild (Owner) commented Mar 11, 2019

I think you are confusing the Docker storage driver with the volume driver. The former, such as AUFS or Device Mapper, is responsible for managing the container's root filesystem, whereas the latter controls the mounted volumes. AUFS stores all previous versions of files, so it would be very inefficient for storing frequently changing data. I am also using AUFS as the storage driver and local as the volume driver.

Could you execute docker volume inspect minio_data and post the output?
Also, could you enter the container, execute df -h, and post the result?
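
Assuming a reasonably recent Docker CLI, you can also check both drivers in one go:

    docker info --format 'Storage driver: {{.Driver}}'
    docker volume inspect --format 'Volume driver: {{.Driver}}' minio_data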

@rakeshnambiar (Author) commented Mar 11, 2019

@loomchild details are below. Please note that I haven't reloaded the ~800GB of data, since it was lost during the last restore attempt, but I can probably load ~100GB and retry.

Volume inspect output:
-----------------------------------------------------------------------
[
    {
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "myproject",
            "com.docker.compose.volume": "minio_data"
        },
        "Mountpoint": "/mnt/virtual/volumes/minio_data/_data",
        "Name": "minio_data",
        "Options": {},
        "Scope": "local"
    }
]

# df -h
------------------------------------------------------------------------
Filesystem                Size      Used Available Use% Mounted on
none                   1007.8G     13.4G    943.2G   1% /
tmpfs                     1.9G         0      1.9G   0% /dev
tmpfs                     1.9G         0      1.9G   0% /sys/fs/cgroup
/dev/sdb               1007.8G     13.4G    943.2G   1% /data
/dev/sdb               1007.8G     13.4G    943.2G   1% /root/.minio
/dev/sdb               1007.8G     13.4G    943.2G   1% /etc/resolv.conf
/dev/sdb               1007.8G     13.4G    943.2G   1% /etc/hostname
/dev/sdb               1007.8G     13.4G    943.2G   1% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     1.9G         0      1.9G   0% /proc/kcore
tmpfs                     1.9G         0      1.9G   0% /proc/timer_list
tmpfs                     1.9G         0      1.9G   0% /proc/timer_stats
tmpfs                     1.9G         0      1.9G   0% /proc/sched_debug
tmpfs                     1.9G         0      1.9G   0% /sys/firmware


Docker info
-------------------------------------------------------------------------

Storage Driver: aufs
 Root Dir: /mnt/virtual/aufs
 Backing Filesystem: extfs


Plugins:
 Volume: local
 Network: bridge host macvlan null overlay

@diversemix
@rakeshnambiar my vote is with @loomchild ... there could be something goofy going on with stdin/stdout ... did you manage to manually decompress your archive OK?

@loomchild (Owner)

Yeah, so it's the local driver for your volume, not AUFS. BTW, could you confirm that minio_data is mounted under /data? Please also retry and let me know. Make sure the data is in the /data directory.
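
One way to double-check the mount from the host (replace minio with your actual container name, which I'm only guessing here) is:

    docker inspect --format '{{ json .Mounts }}' minio

The output should list minio_data with "Destination": "/data".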

Sorry about the lost data; I hope it wasn't critical and wasn't the only copy. As a general best practice, I suggest regularly testing restore procedures - just having a backup is never sufficient.
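
As a minimal sketch of such a test (minio_data_verify is just a scratch volume name), you could restore into a throwaway volume and compare sizes before touching the real one:

    cat minio_data_archive.tar.bz2 | docker run -i -v minio_data_verify:/volume --rm loomchild/volume-backup restore -
    docker run --rm -v minio_data_verify:/volume alpine du -sh /volume
    docker volume rm minio_data_verify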

@loomchild (Owner)

@rakeshnambiar I have another remark. I see that your root filesystem is 1TB and it's the same one mounted at /data. Are you storing both the containers and the volume on the same disk/filesystem? Is it possible that you simply ran out of disk space while doing the backup? Looking at your df -h result, I don't see enough free space to store an extra 1TB archive.

@rakeshnambiar (Author)

@loomchild that was the case in the beginning, but later on I added 3TB of space overall and mounted & merged it into a single virtual folder as described below.

https://romanrm.net/mhddfs

Also, the /data folder is correct, and the mountpoint looks like this:

"Mountpoint": "/mnt/virtual/volumes/minio_data/_data"

@loomchild (Owner) commented Mar 20, 2019

Sorry, I don't see your 3TB in the df -h results, so I am a bit confused, but you are right that the volume looks mounted. What's strange is that it seems to use the same filesystem as the container root, but perhaps that's normal if it corresponds to your host root or if the containers are stored on the mounted volume.

Let me know if you can still reproduce the issue; otherwise I will close the ticket.

@loomchild (Owner) commented Mar 20, 2019

BTW, are you sure that minio_data is mounted under /data in the container using the --volume option? How do/did you start the container?
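
For reference, the mapping I'd expect to see in the compose file looks roughly like this (the service name and image are placeholders; /data and /root/.minio are taken from your df -h output):

    services:
      minio:
        image: minio/minio
        volumes:
          - minio_data:/data
          - minio_config:/root/.minio

    volumes:
      minio_data: {}
      minio_config: {}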

@rakeshnambiar (Author)

We are using docker-compose. The volumes are now unmounted, as I faced this issue 3-4 weeks back, so you cannot see /mnt/virtual. Since this utility didn't help us much, we are planning volume snapshot backups.

@loomchild (Owner)

OK, so I am closing the issue. Feel free to re-open or create a new one if you are able to provide more details and reproduce this particular issue.
