
Root file system is 100% full after recovery #377

Closed
gbeckett opened this issue Mar 8, 2014 · 4 comments

Comments


gbeckett commented Mar 8, 2014

I'm using Rear 1.15 on a RHEL 5.8 server. It has internal disks for /boot and the vg00 LVM volume group, plus SAN storage from an EMC VNX array; we are using multipath for the six SAN storage devices.
Using the following local.conf variables I have been able to back up the server successfully:
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL="nfs://#.#.#.#/admin/rear/"
NETFS_KEEP_OLD_BACKUP_COPY=2

When I attempt to recover the server, it complains with the following error for each of the devices:


No code has been generated to restore device pv:/dev/mpath/mpath3p1 (lvmdev).
Please add code to the /var/lib/rear/layout diskrestore.sh to manually install it or choose abort.


I found issue #228 and edited the disklayout.sh file, changing mpath to mapper as directed for each of my multipath devices, but when I run the recover command I get the same errors.
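For reference, the substitution I applied amounts to something like this (the file name matches what I edited in my setup; the sed form is just a sketch of the manual edit):

```shell
# Back up the layout file (-i.bak), then rewrite the multipath device
# paths from /dev/mpath/ to /dev/mapper/, as suggested in issue #228.
# Run this from the directory that holds the layout file.
sed -i.bak 's|/dev/mpath/|/dev/mapper/|g' disklayout.sh
```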
I've now tried this a number of times, both with and without editing disklayout.sh, and I get the same result.
That being: the server is recovered, I can boot it, the LVM volumes are created, and my multipath devices are there, BUT the root file system in volume group vg00 is 100% full. I can increase the size of root with lvresize and fsadm resize and all works. It was initially 4 GB and I had to increase it to 5 GB.
The next time I attempted the recovery, root was again 100% full, so I had to increase it from 5 GB to 6 GB. I cannot find where the extra data is being stored. I thought it was the inodes, but df -i / indicates that only 1% of the inodes are in use. I've scanned the root file system and compared it against another, identically built server (one that has not been Rear-recovered), but I cannot find the problem.
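In case it helps, this is roughly how I have been hunting for the extra data (a generic sketch, nothing Rear-specific):

```shell
# Stay on the root file system only (-x skips the SAN and /boot mounts)
# and list the largest directories first, sizes in KiB.
du -x -k / 2>/dev/null | sort -rn | head -20
```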
Can anyone offer some suggestions with respect to the mpath errors and the root file system being 100% full?
Thank you very much.
Gary

@gdha gdha added the support label Mar 10, 2014
@gdha gdha self-assigned this Mar 10, 2014

gdha commented Mar 10, 2014

Did you change the lvm.conf file? Was the multipath.conf file modified? Can we see your local.conf file and the files under /var/lib/rear/layout/?
Could it be that the recover process restores data to / instead of a SAN disk file system?
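One quick way to check would be to verify that every SAN file system is actually mounted before the restore writes into it; if a mount point is missing, the files land on / instead. The /san/data path below is just a hypothetical example:

```shell
# Show per-file-system usage so a bloated / stands out.
df -hT

# Verify a given SAN mount point is really mounted (example path).
grep -q ' /san/data ' /proc/mounts \
  || echo "/san/data not mounted -- restored files would land on /"
```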

gbeckett (author) commented

Hi, sorry for the delay.
No, I have not modified the lvm.conf file. Though I have read through the RH multipath doc you indicated, I'm still not sure which parameters/switches to use for the filter.

My local.conf file has the following.
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL="nfs://10.67.138.34/admin/rear1/"
NETFS_KEEP_OLD_BACKUP_COPY=2

The files in /var/lib/rear/layout are fairly verbose. Is there a way to upload the files rather than dumping their contents into this log?

As for your last question: yes, I guess it's possible that something is being recovered into the root file system, but as I said, I have searched through it and cannot seem to find where or what it would be.

Thanks for your help; it's very much appreciated.
Please advise.


gdha commented Apr 1, 2014

@gbeckett You can drop them in gist.github.com and reference those gists in this issue.


gdha commented Jun 3, 2015

@gbeckett I guess you gave up on rear? SAN disks are normally not touched during recovery, but as I never saw any evidence from your side I cannot say much more about it.
I'll close this issue; if you have new input you may re-open this request.

@gdha gdha closed this as completed Jun 3, 2015