btrfs multiple devices confusion: automatically unmounted /home, clobbered ssh session #14674
Looks nearly the same as #14454. I'll answer the relevant questions from that issue with info for this setup.
Attaching /run/systemd/generator/home.mount
It's ok I don't need any help. When I write, they can respond, but they
can also report their analysis in my language and grammar. I repeat the
same expression, and I will not get to know and detect me to tell. The
statements I have sought from each channel have been the same in the past,
but no one wants to pay attention
Chris Murphy <notifications@github.com> wrote on Monday, January 27, 2020, at 10:38:
*systemd version the issue has been seen with*
systemd-244.1-2.fc32.x86_64
*Used distribution*
Fedora Rawhide
*Expected behaviour you didn't see*
/home should remain mounted, user login stays logged in
*Unexpected behaviour you saw*
/home is automatically unmounted, user is logged out
*Steps to reproduce the problem*
1. Minimal installation, layout looks like this:
# lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
vda
├─vda1 vfat FAT32 ACE0-7EB6 590.3M 1% /boot/efi
├─vda2 ext4 1.0 cdb6c92a-a461-43e4-b6f9-57e865b32f0c 812M 10% /boot
├─vda3 swap 1 0d13000a-c258-41f9-a964-8960321f59fa [SWAP]
└─vda4 btrfs fedora b2e7ba8f-70cb-4286-89d0-21b8a1f9af0a 26.2G 3% /home
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda4 28G 804M 27G 3% /
/dev/vda4 28G 804M 27G 3% /home
/dev/vda2 976M 97M 813M 11% /boot
/dev/vda1 599M 8.5M 591M 2% /boot/efi
2. Live migration to a new configuration: this could be to a replacement drive,
or in this case to a ramdisk, so that the original device can be
repartitioned without rebooting:
# umount /boot/efi
# umount /boot
# swapoff /dev/vda3
# modprobe zram
# zramctl -f -a lz4 -s 1536M
# btrfs dev add /dev/zram0 /
# btrfs dev rem /dev/vda4 /
At this point everything is working OK, no complaints.
3. But the instant I run fdisk, systemd kicks me out of the ssh user
session and unmounts /home, preventing login. This is what's recorded in
the journal at the moment I run fdisk; the attached journal shows what happens
after that.
[ 830.014371] localhost.localdomain kernel: vda: vda1 vda2 vda3 vda4
I'm not sure what the logic/trigger is for the user session dying and
/home being unmounted. It's almost like reboot/shutdown behavior. I'm not
sure it's a bug. But I'm also sure it's not the behavior I expect, and some
kind of command to (temporarily) inhibit this behavior for this use case is
not exactly discoverable: e.g. some kind of systemd maintenance mode to
indicate that it should automatically suspend behaviors that can result in
being unable to easily access the system. Once I'm kicked out, I can't log
back in because /home is unmounted. So now I need direct access to the VM
or server.
I can trivially reproduce it on openSUSE Tumbleweed with kernel 5.4.10 and systemd 244. When you call fdisk, the kernel sends remove/add events for the partitions.
This demonstrates a fundamental design problem in systemd: a filesystem instance is associated with a single device, and this association is static, generated once at boot, and never changed. This assumption does not work for multi-device filesystems like btrfs or zfs, where the underlying devices can be changed online at any time, completely replaced, and the original devices repurposed. systemd is not prepared to deal with that.
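The failure mode described above can be sketched as a toy model (the class names and propagation logic are hypothetical illustrations, not systemd's actual code): the mount unit is bound at generator time to whichever device backed the filesystem at boot, so when that device's uevents flap, the mount is stopped even though the multi-device btrfs filesystem is still healthy.

```python
# Toy model (NOT systemd source) of static BindsTo= from a mount to a device.

class DeviceUnit:
    """Stand-in for a systemd .device unit."""
    def __init__(self, name):
        self.name = name
        self.active = True

class MountUnit:
    """Stand-in for a .mount unit with a BindsTo= dependency fixed at boot."""
    def __init__(self, name, bound_device):
        self.name = name
        self.bound_device = bound_device  # generated once, never updated
        self.active = True

    def propagate(self):
        # BindsTo= semantics: if the bound unit goes inactive, stop this unit.
        if not self.bound_device.active:
            self.active = False

# Generator runs at boot: /home is bound to the device it was on *then*.
vda4 = DeviceUnit("dev-vda4.device")
home = MountUnit("home.mount", vda4)

# Later the data is migrated to another btrfs member device; the filesystem
# is fine, but the binding still points at the old device. fdisk then causes
# remove/add uevents, the old device unit goes inactive, and:
vda4.active = False
home.propagate()
print(home.active)  # False: the mount is stopped although the fs is healthy
```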
which most likely is the result of ioctl(..., BLKRRPART, ...)
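For reference, that whole-disk rescan can be sketched as follows. BLKRRPART is the classic "re-read partition table" ioctl (_IO(0x12, 95) from the kernel headers); invoking it drops and re-adds every partition device node, which is exactly what udev then reports as remove/add events. The helper name is hypothetical, and the ioctl needs root and a real disk, so it is only defined here, not executed:

```python
# Hedged sketch: asking the kernel to re-read a disk's partition table.
import fcntl
import os

BLKRRPART = 0x125F  # _IO(0x12, 95): re-read partition table

def reread_partition_table(disk="/dev/vda"):
    """Ask the kernel to forget and re-scan all partitions on `disk`.
    Requires root; every partition device briefly disappears."""
    fd = os.open(disk, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, BLKRRPART)
    finally:
        os.close(fd)

print(hex(BLKRRPART))  # 0x125f
```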
We don't generate BindsTo= from .mount to .device anymore these days. Except that for you it was apparently created? Can you check the comments on #14454 regarding that, i.e. we need to figure out where BindsTo= comes from?
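If the binding does turn out to come from the fstab generator, one possible workaround is the x-systemd.device-bound= mount option documented in systemd.mount(5). A hedged sketch, assuming the option is honored on this systemd version (the UUID is taken from the lsblk output above):

```
# /etc/fstab -- weaken the mount-to-device binding so uevent churn on the
# original member device does not stop the mount (assumption: the fstab
# generator honors x-systemd.device-bound= here)
UUID=b2e7ba8f-70cb-4286-89d0-21b8a1f9af0a  /home  btrfs  defaults,x-systemd.device-bound=false  0 0
```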
From the
Possible goose chase... Booting with rd.udev.log_priority=debug, I see:
Sorry? (Line 359 in 7d20404)
Hmm, true, we actually do, for those configured in /etc/fstab, but not for the others... So fdisk these days is actually capable of not removing all partitions in the kernel, but operating incrementally, so that the devices never disappear.
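A minimal sketch of what "operating incrementally" means, assuming the partitioner diffs the old and new tables and only touches changed entries (so untouched partitions' device nodes never disappear, unlike a full BLKRRPART rescan). The function and sample data are illustrative, not util-linux code:

```python
# Toy model of incremental partition-table updates: compute the minimal set
# of kernel operations (delete/add/resize one partition at a time, as the
# BLKPG ioctls allow) instead of dropping and re-adding everything.

def partition_diff(old, new):
    """old/new: dict of partition number -> (start_sector, size_sectors).
    Returns per-partition operations; unchanged partitions are left alone."""
    ops = []
    for num in sorted(old.keys() - new.keys()):
        ops.append(("del", num))                    # partition removed
    for num in sorted(new.keys() - old.keys()):
        ops.append(("add", num, *new[num]))         # partition created
    for num in sorted(old.keys() & new.keys()):
        if old[num] != new[num]:
            ops.append(("resize", num, *new[num]))  # geometry changed
    return ops

old = {1: (2048, 1228800), 2: (1230848, 2097152), 4: (5429248, 60000000)}
new = {1: (2048, 1228800), 2: (1230848, 4194304), 3: (5429248, 1000000)}
print(partition_diff(old, new))
# -> [('del', 4), ('add', 3, 5429248, 1000000), ('resize', 2, 1230848, 4194304)]
```

Note that partition 1 never appears in the output, so its device node would stay present throughout, which is why an ssh session rooted on it would survive.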
The next step in the use case, whether with fdisk, gdisk (or variants), or parted, is to resize its partitions and write out the new GPT (or MBR). Presumably the kernel refreshes this device's partition map, since nothing is actively pinning it. But if it doesn't refresh, the user will either ... The analog to this on LVM is to use pvmove.
Huh? And what if the user removes partitions intentionally?
Wrong, you do add |
bug14674_journalhomekill.txt