
SLES11.2 recover ERROR: No filesystem mounted on /mnt/local. Stopping. #166

Closed
MatthiasMerk opened this issue Oct 12, 2012 · 6 comments

@MatthiasMerk

Hi

I'm having trouble restoring a SLES11.2 installation with ReaR 1.14 / Git. It's a KVM node with two disks, vda and vdb, and I'm trying to restore it on another KVM node with only one disk, vda. That's why I'm excluding smtvg01.

The error message is: ERROR: No filesystem mounted on /mnt/local. Stopping.

Custom configuration in /etc/rear:
local.conf:

OUTPUT=PXE
EXCLUDE_MOUNTPOINTS=(/srv/www /var/downloads)
EXCLUDE_VG=(smtvg01)

os.conf:

OS_VERSION=11

site.conf:

TIMESYNC=NTP

The node is configured with / on an ext3 partition and the rest of the filesystems on a VG.

Here's the "rear recover" output:

We will now restore the following filesystems:
/
/home
/opt
/opt/IBM/ITMAG
/opt/Tivoli/lcf
/tmp
/usr
/var
/var/tmp/sysb
Is this selection correct ? (Y|n) [30sec]
Comparing disks.
Disk configuration is identical, proceeding with restore.
Start system layout restoration.
Disk layout created.
ERROR: No filesystem mounted on /mnt/local. Stopping.
Aborting due to an error, check /var/log/rear/rear-lnx0100a.log for details
Terminated

The target disk didn't get partitioned:

RESCUE lnx0100a:/mnt/local # fdisk -l /dev/vda

Disk /dev/vda: 32.2 GB, 32212254720 bytes
16 heads, 63 sectors/track, 62415 cylinders, total 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/vda doesn't contain a valid partition table

Here's the original disk layout:

Disk /dev/vda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders, total 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb66ddbb6

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *          63     1060289      530113+  83  Linux
/dev/vda2         1060290    62910539    30925125   8e  Linux LVM

And the original mounts:

$ mount
/dev/mapper/360060e80164c170000014c170000a219_part1 on / type ext3 (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/mapper/rootvg-home on /home type ext3 (rw)
/dev/mapper/rootvg-opt on /opt type ext3 (rw)
/dev/mapper/rootvg-itmag on /opt/IBM/ITMAG type ext3 (rw)
/dev/mapper/rootvg-tivoli on /opt/Tivoli/lcf type ext3 (rw)
/dev/mapper/rootvg-tmp on /tmp type ext3 (rw)
/dev/mapper/rootvg-usr on /usr type ext3 (rw)
/dev/mapper/rootvg-var on /var type ext3 (rw)
/dev/mapper/rootvg-sysb on /var/tmp/sysb type ext3 (rw)
/dev/mapper/smtvg01-smtlv01 on /srv/www type ext3 (rw,acl,user_xattr)
/dev/mapper/rootvg-lvdownload on /var/downloads type ext3 (rw,acl,user_xattr)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /var/lib/ntp/proc type proc (ro,nosuid,nodev)

Any ideas?
Let me know if you need the logs of "rear mkrescue" and/or "rear -Dv recover".

@jhoekx
Contributor

jhoekx commented Oct 12, 2012

ReaR fails because you have multipath on top of your /dev/vda disk; your rootvg is on top of that multipath device.

AUTOEXCLUDE_MULTIPATH is 'y' by default, so no filesystems on the multipath device were saved and none of the devices on top of it get recreated.

Solutions:

  • set AUTOEXCLUDE_MULTIPATH= in your local.conf (ugly; see the sketch after this list)
  • change your LVM configuration to only activate vd* devices (can break things)
  • change your multipath configuration to not create mapper targets on top of vd* devices (best)
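
For the first option, a minimal sketch of the change in /etc/rear/local.conf (the empty assignment overrides the default of 'y'):

# appended to the existing /etc/rear/local.conf
# an empty value disables the automatic exclusion of multipath devices
AUTOEXCLUDE_MULTIPATH=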

@ghost assigned jhoekx Oct 12, 2012
@MatthiasMerk
Author

Thanks for your answer.

I was under the impression that we had multipath generally disabled on our KVMs. So I did some investigating and found out that it's disabled on the SLES11.1 VMs but enabled on the SLES11.2 ones.

What's strange is that the boot.multipath and multipathd init scripts are disabled, yet the dm_multipath module is loaded.

I tried regenerating the initrd and tricking it into thinking that the VM isn't multipathed, but to no avail.

Any ideas on how I could disable multipathing?

The disks are virtio PCI devices:

lspci |grep SCSI
00:09.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:0a.0 SCSI storage controller: Red Hat, Inc Virtio block device
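
This is roughly how I'm checking the multipath state, for reference (standard device-mapper / multipath-tools commands):

lsmod | grep dm_multipath        # module is loaded even though the init scripts are off
multipath -ll                    # lists any active multipath maps
dmsetup table | grep multipath   # shows mapper targets built on top of vda/vdb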

@jhoekx
Contributor

jhoekx commented Oct 15, 2012

Maybe adding /dev/vd* to the blacklist in multipath.conf and regenerating your initrd will work?
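
Untested, but the stanza could look something like this (the regex should cover vda, vdb, ... plus their partitions):

blacklist {
    devnode "^vd[a-z][0-9]*"
}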

@MatthiasMerk
Author

I tried adding devnode "^vd[a-z][0-9]*" to the blacklist stanza in /etc/multipath.conf, blacklisted dm_multipath and dm_round_robin in /etc/modprobe.d/blacklist, and regenerated the initrd, to no avail.

I think the multipath.conf is not being used, since the multipath daemon isn't running.

Maybe I should open an SR with SUSE.

@MatthiasMerk
Author

OK, I got it to work by extracting the current initrd and removing everything multipath-related, then rebooting with the new initrd. After that, dm_multipath and dm_round_robin were gone and mkinitrd created an initrd without the multipath feature.
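
Roughly what I did, for reference (the exact file names inside the initrd may differ on other setups):

mkdir /tmp/initrd-work && cd /tmp/initrd-work
zcat /boot/initrd-$(uname -r) | cpio -idmv        # unpack the current initrd
find . -name 'dm-multipath.ko' -delete            # remove the multipath kernel modules
find . -name 'dm-round-robin.ko' -delete
rm -f boot/*multipath*                            # and any multipath setup scripts
find . | cpio -o -H newc | gzip > /boot/initrd-nomultipath   # repack
# then point the bootloader at the new initrd and reboot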

The VM is restoring fine right now :)

Thanks!
