Data can be written on an unmounted filesystem #493

Closed
HaleTom opened this issue Aug 11, 2017 · 5 comments

@HaleTom

HaleTom commented Aug 11, 2017

On my system:

$ uname -a
Linux svelte 4.9.39-1-MANJARO #1 SMP PREEMPT Fri Jul 21 08:25:24 UTC 2017 x86_64 GNU/Linux
$ umount --version
umount from util-linux 2.30 (libmount 2.30.0: btrfs, assert, debug)

I have been observing some strange behaviour:

  • My removable disk device changing from /dev/sdb to /dev/sdc and sometimes sdd
  • Kernel messages for /dev/sdb when only /dev/sdc exists in the filesystem
  • Random reboots
  • btrfs corruption

I've tracked down what I believe to be the cause: I have a script to unmount everything under /media with umount -l so I can then disconnect my removable media.
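
The script boils down to a loop like this (a simplified sketch; the real script may differ in details):

# simplified sketch of my unmount-everything-under-/media script (details may differ)
for mp in /media/*/; do
    sudo umount --lazy "$mp"
done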

Here is an example of:

  • umount -l <filesystem>
  • The filesystem and device being "locked" and not unmounted
  • Data being written to a supposedly unmounted device
  • This still occurs even with umount --force --all-targets /dev/<device>
  • The device name changing when the device is removed and re-inserted (as the previous device is still "locked"/ in use)

Below, /dev/sdb is a USB HDD with two partitions combined via btrfs magic into a single filesystem. (There is an ext4 example of writing to an unmounted device after this one)

$ lsblk -f /dev/sdb
NAME           FSTYPE      LABEL                     UUID                                   MOUNTPOINT
sdb
├─sdb1         ntfs        Seagate Backup Plus Drive 01D3117F9DB28EC0
├─sdb2         btrfs       svelte-backup             74bccde8-c47e-47b1-a713-4150f2ceda00
└─sdb3         LVM2_member                           MtLb3p-MUle-8fyk-fy6m-z99n-V9mi-4j5DYd
  └─2TB-backup btrfs       svelte-backup             74bccde8-c47e-47b1-a713-4150f2ceda00

Write data on an unmounted filesystem

Mount the 'svelte-backup' filesystem:

$ sudo mount /dev/sdb2 /media/backup

Keep a "lock" on the filesystem and device via current directory:

$ cd /media/backup/

File SURPRISE doesn't exist:

$ ls SURPRISE
ls: cannot access 'SURPRISE': No such file or directory

Unmount the filesystem with --lazy:

$ sudo umount -l /media/backup

Then really unmount it:

$ sudo umount --force --all-targets /dev/sdb2
umount: /dev/sdb2: not mounted

Write SURPRISE to an unmounted filesystem and device:

$ touch SURPRISE
$ sync
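
As an aside: if my understanding of sysfs is right, the supposedly unmounted btrfs superblock is still alive at this point and should still be registered under /sys/fs/btrfs, so a check like the following ought to show it (this is an assumption, not something captured in the session above):

$ ls -d /sys/fs/btrfs/74bccde8-c47e-47b1-a713-4150f2ceda00   # assumption: exists only while the superblock is active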

Physically disconnect and reconnect USB HDD

$ lsblk -f /dev/sdb
lsblk: /dev/sdb: not a block device

Now /dev/sdb is somehow "locked" by the working directory.

$ pwd
/media/backup

The reinserted device lists as /dev/sdc:

$ lsblk -f /dev/sdc
NAME           FSTYPE      LABEL                     UUID                                   MOUNTPOINT
sdc
├─sdc1         ntfs        Seagate Backup Plus Drive 01D3117F9DB28EC0
├─sdc2         btrfs       svelte-backup             74bccde8-c47e-47b1-a713-4150f2ceda00
└─sdc3         LVM2_member                           MtLb3p-MUle-8fyk-fy6m-z99n-V9mi-4j5DYd
  └─2TB-backup btrfs       svelte-backup             74bccde8-c47e-47b1-a713-4150f2ceda00

Remove current directory "lock":

$ cd /media

Mount the newly named device:

$ sudo mount /dev/sdc2 /media/backup
$ ls -l /media/backup/SURPRISE
-rw-r--r-- 1 ravi ravi 0 Aug 11 11:36 /media/backup/SURPRISE

The SURPRISE file was written to an unmounted filesystem.


Reproducible example using a loopback-mounted ext4 filesystem

Create a new mountpoint and filesystem, then mount it:

$ cd /tmp
$ truncate -s 200M ext4-loop
$ mkfs.ext4 -q ext4-loop
$ losetup -f --show ext4-loop
/dev/loop0
$ mkdir mountpoint
$ sudo mount /dev/loop0 mountpoint/

Now change to the directory to get the weird behaviour:

$ cd mountpoint/

Unmount the filesystem while inside its mountpoint:

$ sudo umount /tmp/mountpoint
umount: /tmp/mountpoint: target is busy.
$ sudo umount -l /tmp/mountpoint
$ sudo umount -f /dev/loop0
umount: /dev/loop0: not mounted.

So now, I am told that the device is unmounted.

Write a new file to where the filesystem was previously mounted. I would expect that this would either error (as the reference to the current directory is invalid), or write the file to the directory which the filesystem was hiding while it was mounted.

$ pwd
/tmp/mountpoint
$ sudo touch SURPRISE
$ mount|grep mountpoint
$

Hmm, no error... so where is our surprise hiding?

$ cd ..
$ ls -la mountpoint
total 0
drwxr-xr-x  2 ravi ravi  40 Aug 10 15:48 .
drwxrwxrwt 19 root root 840 Aug 10 16:00 ..

And it wasn't written to the underlying mountpoint directory.

Could it have been written to the device which I was told was unmounted?

To prove no caching is at play, remove the loopback device and copy the filesystem file:

$ losetup -d /dev/loop0
$ cp ext4-loop should-be-virgin-loop

Mount the copied filesystem (which should be empty):

$ losetup -f --show should-be-virgin-loop
/dev/loop0
$ sudo mount /dev/loop0 mountpoint/
$ ls -la mountpoint
total 13,312
drwxr-xr-x  3 root root  1,024 Aug 10 15:52 .
drwxrwxrwt 19 root root    840 Aug 10 16:00 ..
drwx------  2 root root 12,288 Aug 10 15:46 lost+found
-rw-r--r--  1 root root      0 Aug 10 15:52 SURPRISE
$

SURPRISE: the file was written to an unmounted filesystem.


Conundrums

  1. How do I know when a filesystem unmounted with umount -l has actually been unmounted? (See the sketch below.)
  2. How can I force the unmount of the device after an umount -l?
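
For question 1, the best I can come up with for the btrfs case is to poll sysfs until the superblock disappears. This is only a hedged sketch; it assumes /sys/fs/btrfs/<UUID> exists exactly as long as the superblock is active:

# Hedged sketch: wait until the lazily-unmounted btrfs superblock is really gone.
# Assumption: /sys/fs/btrfs/<UUID> is removed once the superblock is finally released.
uuid=74bccde8-c47e-47b1-a713-4150f2ceda00
while [ -d "/sys/fs/btrfs/$uuid" ]; do
    sleep 1
done
echo "filesystem $uuid is now really unmounted"
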
@rudimeier
Contributor

rudimeier commented Aug 11, 2017 via email

@HaleTom
Author

HaleTom commented Aug 11, 2017

I want to be able to use --lazy so as to avoid in-file corruption that could occur with --force or mount -o remount,ro.

Why do you say umount -l shouldn't be used at all? Shouldn't this warning be in the man page?

The lsof solution only works before umount -l:

$ lsof | grep $$ | grep mountpoint
bash       5607       ravi  cwd       DIR                7,0      1024          2 /tmp/mountpoint
$ sudo umount -l /tmp/mountpoint
$ lsof | grep $$ | grep mountpoint
$

It seems quite dangerous that there is (I hope I'm wrong!):

  • No way of actually unmounting a umount -l filesystem after the fact
  • No way of knowing if it has been unmounted or not

So the user is left in limbo-land.
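
The closest thing to a workaround I can think of is to scan /proc for processes whose working directory still resolves into the detached filesystem. This is only a hedged sketch: it assumes /proc/<pid>/cwd still resolves inside the lazily-detached mount, and it needs the filesystem's device number recorded before the umount -l:

# Assumption: /proc/<pid>/cwd still resolves into the detached mount.
# Record the device number while the filesystem is still visibly mounted:
fsdev=$(stat -c %D /tmp/mountpoint)

# ...later, after 'umount -l', look for processes still sitting on that device:
for p in /proc/[0-9]*; do
    dev=$(stat -Lc %D "$p/cwd" 2>/dev/null)
    [ "$dev" = "$fsdev" ] && echo "${p#/proc/} still has its cwd on the detached filesystem"
done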

I wonder, is "losetup -d exits 0 when it doesn't succeed" (#484) related in any way?

@karelzak
Collaborator

The filesystem is invisible to other processes after umount --lazy; it does not make sense to use tools like lsof.

I don't think standard "umount" produces file corruption. umount(2) returns -EBUSY in this case and the filesystem is not unmounted. You have to remove all processes (and so on) to unmount the filesystem.

The --force option is designed for NFS; I have doubts that any other filesystem uses it.

This is really how --lazy is designed; I don't see any bug here. Sorry.

@HaleTom
Author

HaleTom commented Aug 12, 2017

So is there a kernel mechanism to know when the device has eventually been unmounted and is safe to remove?

A running btrfs balance may not be the easiest thing to kill...

@HaleTom
Author

HaleTom commented Feb 23, 2019

The answer to my question immediately above is that it's not easy.

@karelzak Thanks for the grace with which you handled communication on this non-issue :)
