
systemd automatically mounts LVM volumes listed in /etc/fstab when they are activated after boot #6066

Closed
knoha-rh opened this issue Jun 1, 2017 · 9 comments

Comments

@knoha-rh

knoha-rh commented Jun 1, 2017

Submission type

  • Bug report

systemd version the issue has been seen with

systemd-231-15.fc25

Used distribution

Fedora 25

In case of bug report: Expected behaviour you didn't see

systemd should not automatically mount LVM volumes listed in /etc/fstab when those volumes are activated after the boot process has completed.
In the worst case, this automatic mounting can cause filesystem corruption, because the user may not intend it.
systemd should mount every listed filesystem at boot time according to the specified options, but it should not act on mount status after the boot process has completed.

In case of bug report: Unexpected behaviour you saw

Currently, systemd tries to mount LVM volumes listed in /etc/fstab whenever those volumes are activated, even after the boot process has completed.

In case of bug report: Steps to reproduce the problem

  1. Create LVM volumes.

  2. Write /etc/fstab entries for the LVM volumes to mount them at boot time.

  3. Reboot the system.

  4. Confirm these volumes are mounted.

# mount
--- 8< --- <snip> --- 8< ---
/dev/mapper/vg03-lv02 on /mnt3b type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg03-lv01 on /mnt3a type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg03-lv03 on /mnt3c type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg02-lv01 on /mnt2a type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg02-lv02 on /mnt2b type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg02-lv03 on /mnt2c type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg01-lv01 on /mnt1a type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg01-lv02 on /mnt1b type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg01-lv03 on /mnt1c type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
  5. Unmount the filesystems related to some of the LVM volumes.
# umount /mnt1a /mnt1b /mnt1c
# mount
--- 8< --- <snip> --- 8< ---
/dev/mapper/vg03-lv02 on /mnt3b type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg03-lv01 on /mnt3a type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg03-lv03 on /mnt3c type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg02-lv01 on /mnt2a type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg02-lv02 on /mnt2b type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg02-lv03 on /mnt2c type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
  6. Deactivate the LVM volumes.
# vgchange -an vg01
device-enumerator: scan all dirs
  device-enumerator: scanning /sys/bus
  device-enumerator: scanning /sys/class
  0 logical volume(s) in volume group "vg01" now active

# mount -a
mount: special device /dev/mapper/vg01-lv01 does not exist
mount: special device /dev/mapper/vg01-lv02 does not exist
mount: special device /dev/mapper/vg01-lv03 does not exist
  7. Activate the LVM volumes.
# vgchange -ay vg01
device-enumerator: scan all dirs
  device-enumerator: scanning /sys/bus
  device-enumerator: scanning /sys/class
  3 logical volume(s) in volume group "vg01" now active
  8. Confirm the entries in /etc/fstab are mounted automatically.
# mount
--- 8< --- <snip> --- 8< ---
/dev/mapper/vg03-lv02 on /mnt3b type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg03-lv01 on /mnt3a type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg03-lv03 on /mnt3c type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg02-lv01 on /mnt2a type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg02-lv02 on /mnt2b type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg02-lv03 on /mnt2c type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg01-lv02 on /mnt1b type xfs (rw,relatime,seclabel,attr2,inode64,noquota) <--- This is automatically mounted.
/dev/mapper/vg01-lv01 on /mnt1a type xfs (rw,relatime,seclabel,attr2,inode64,noquota) <--- This is automatically mounted.
/dev/mapper/vg01-lv03 on /mnt1c type xfs (rw,relatime,seclabel,attr2,inode64,noquota) <--- This is automatically mounted.
@zdzichu
Contributor

zdzichu commented Jun 1, 2017

Are you using the noauto keyword in fstab?

@knoha-rh
Author

knoha-rh commented Jun 1, 2017

Hi,

The environment doesn't use the noauto option in those lines.

# cat /etc/fstab 
/dev/mapper/fedora-root /                       xfs     defaults        0 0
UUID=46b35f82-0071-47b9-b260-3419bd523eb1 /boot                   ext4    defaults        1 2
/dev/mapper/fedora-swap swap                    swap    defaults        0 0
/dev/mapper/vg01-lv01 /mnt1a	xfs defaults 0 0
/dev/mapper/vg01-lv02 /mnt1b	xfs defaults 0 0
/dev/mapper/vg01-lv03 /mnt1c	xfs defaults 0 0
/dev/mapper/vg02-lv01 /mnt2a	xfs defaults 0 0
/dev/mapper/vg02-lv02 /mnt2b	xfs defaults 0 0
/dev/mapper/vg02-lv03 /mnt2c	xfs defaults 0 0
/dev/mapper/vg03-lv01 /mnt3a	xfs defaults 0 0
/dev/mapper/vg03-lv02 /mnt3b	xfs defaults 0 0
/dev/mapper/vg03-lv03 /mnt3c	xfs defaults 0 0

Because I want those volumes mounted at boot time, I don't want to use the noauto option.

@poettering
Member

If you don't want file systems to be mounted automatically, then use the "noauto" mount option. Otherwise, they will be mounted as they show up.

Note that on today's systems there's really no distinction between a "startup" and a "runtime" phase, as file systems may take any amount of time they want to show up, and systemd will act on that as necessary, and instantly, in order to fulfill the configuration so that the system can boot up entirely.

Hence, please just add "noauto" to these lines, and you should get the behaviour you want.

I hope that makes sense. Closing.
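
For illustration, the suggested change applied to one of the fstab lines above would look roughly like this (a sketch; note that noauto also means the entry is not mounted at boot, which is the trade-off discussed in the following comments):

/dev/mapper/vg01-lv01 /mnt1a	xfs defaults,noauto 0 0

The filesystem can then still be mounted manually when wanted:

# mount /mnt1a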

@knoha-rh
Author

Hello,

@poettering Thank you for your comment. I'm afraid you may have missed something in my issue report.
The issue occurs after boot time.

For example,

  • /etc/fstab
/dev/mapper/vg00-lvol01 /mnt1a xfs defaults 0 0

Consider the configuration above.

Users expect that

  • The filesystem in /dev/mapper/vg00-lvol01 will be mounted at boot time.

So there is no way to use the 'noauto' option, because they expect the filesystem to be mounted at system boot.
On the other hand, if they deactivate vg00 for a backup and then reactivate it afterwards, systemd will mount /dev/mapper/vg00-lvol01 automatically, even though the user may still have something to do on the other system, the one that takes the backup of the disk.

In the worst case, the current systemd behavior can cause data corruption, because the filesystem may unintentionally be mounted from different systems at the same time.

Under the old SysV behavior, an LVM volume was not mounted automatically after activation in the use case above. Many system administrators are unaware of the current systemd behavior, because it is not documented as a difference from SysV.
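
A condensed sketch of the backup scenario described here, reusing the names from this comment (the mount point /mnt1a is borrowed from the reproduction steps above):

# umount /mnt1a
# vgchange -an vg00
(the other system now takes its backup of the disk)
# vgchange -ay vg00
(systemd remounts /mnt1a here immediately, without the administrator asking for it)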

@poettering
Member

There is no distinction between boot time and any other time anymore on today's systems, as devices can show up at any time they want, taking any amount of time they want. There's simply no point in time anymore at which "all devices have shown up", as devices might take an unbounded time to appear. On today's systems it's all about having enough devices to fulfill the dependencies of something before doing that something.

SysV pretended there was a point where everything had shown up, then mounted everything and then proceeded with booting. This is not compatible with today's hardware, though; it just doesn't work that way.
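
How this plays out can be inspected on the affected system. A sketch using one of the mount points from the report (exact output depends on the systemd version): systemd-fstab-generator turns each fstab line into a .mount unit, and that unit is pulled in again as soon as its backing device unit reappears.

# systemctl cat mnt1a.mount
# systemctl list-dependencies --reverse mnt1a.mount

The first command shows the generated unit; the second lists what pulls the mount in, which should include the corresponding dev-mapper-vg01\x2dlv01.device unit.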

@knoha-rh
Author

Thank you for your detailed explanation. I understand now that the current systemd behavior is intended.
Would you consider documenting this behavior in the man pages or other documentation of systemd or its relevant subcomponents? Many system administrators don't know about this design or that it is expected.

I also found that this behavior sometimes fails in a particular case. I'll file a new issue later.

@rbeldin

rbeldin commented Jun 15, 2017

I'm curious about performing LVM-related activities in such a scenario. It seems that the admin needs to know to edit /etc/fstab and comment out those mounts that are undergoing maintenance, so that auto-generated systemd mount activities don't run. Is that right?

I think the paradigm of 'things that happen at boot time' is still quite strong in many environments. I also don't think that downstream relationships (mounting a fs because a VG was activated) are necessarily expected.

@arvidjaar
Contributor

It seems that the admin needs to know to edit /etc/fstab and comment out those mounts that are undergoing maintenance, so that auto-generated systemd mount activities don't run. Is that right?

It is not enough. You also need to restart systemd. See also https://lists.freedesktop.org/archives/systemd-devel/2017-June/039054.html
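
In practice that means something like the following after commenting out the entry in /etc/fstab (a sketch; a daemon-reload re-runs systemd-fstab-generator so the stale .mount unit is regenerated, but see the linked thread for the subtleties):

# systemctl daemon-reload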

@pboguslawski

pboguslawski commented Nov 22, 2021

There is no distinction between boot time and other time any more on today's systems

Automounting filesystems during maintenance may be dangerous; even in "today's systems".

Isn't multi-user.target a good point at which to distinguish between the pre- and post-boot periods?

Why not add a global systemd parameter to enable/disable automounting of /etc/fstab filesystems after multi-user.target is reached, and/or a mount option that allows configuring it individually per filesystem in /etc/fstab?

A workaround that works for us now: find the unit name (e.g. mnt1a.mount) for the filesystem (e.g. /mnt1a) that needs maintenance and mask it

systemctl mask mnt1a.mount

and unmask it after maintenance

systemctl unmask mnt1a.mount
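
Combined with the LVM steps from the original report, a full maintenance pass might look like this (a sketch reusing the names from this thread; systemd-escape derives the unit name from the mount point):

# systemd-escape -p --suffix=mount /mnt1a
mnt1a.mount
# systemctl mask mnt1a.mount
# umount /mnt1a
# vgchange -an vg01
(perform the maintenance or backup)
# vgchange -ay vg01
# systemctl unmask mnt1a.mount
# mount /mnt1a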
