
File system suddenly "missing" after update #143

Open
Xedon420 opened this Issue Jul 21, 2018 · 27 comments


Xedon420 commented Jul 21, 2018

Hi,

I updated to the latest OMV version a few days ago and have had a big problem ever since (here is a screenshot of the error: http://prntscr.com/k9a45r).
Here is the sysinfo: http://prntscr.com/k9a4iv

At first I didn't notice the error because everything kept working as before; only when I logged into the panel again yesterday and wanted to assign a folder to a user did I notice it. All files are undamaged, and installed services like Plex, SMB, etc. continue to work on the file system. At first I blamed a display bug, so I restarted once, but unfortunately got the same error again... The file system is still intact and can be used without problems, so I conclude there is some error in the panel.

I would be happy for any help ;)

fstab
info

votdev commented Jul 21, 2018

The filesystem is not mounted, so please press the "Mount" button.

Xedon420 commented Jul 21, 2018

I can't press it :(
The button is grayed out.

votdev commented Jul 23, 2018

What is the state of the RAID containing the filesystem?

# cat /proc/mdstat
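For reference, a healthy array in /proc/mdstat looks roughly like this (illustrative output from a different system; device names, sizes, and RAID level are made up and will differ here):

```shell
# Print the kernel's MD status; the output below is illustrative only.
# A healthy two-disk RAID1 shows [2/2] [UU]; a degraded array would
# instead show e.g. [2/1] [U_] with a member missing.
cat /proc/mdstat
# Personalities : [raid1]
# md0 : active raid1 sda[0] sdb[1]
#       976630464 blocks super 1.2 [2/2] [UU]
# unused devices: <none>
```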
Xedon420 commented Jul 23, 2018

mdstat

votdev commented Jul 23, 2018

What happens if you mount the filesystem manually via CLI?

P.S.: You can attach images directly here. Images hosted on external services become unavailable some day, and then this bug report becomes useless. Best is to copy/paste the raw text.
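A manual mount attempt via the CLI could look like this (a sketch; /dev/md0 and the mount directory are assumptions — take the real values from /etc/fstab):

```shell
# Sketch: mount the RAID filesystem by hand and verify it shows up.
# /dev/md0 and the mount directory are assumptions -- use the values
# from your /etc/fstab instead.
mkdir -p /srv/storage
mount /dev/md0 /srv/storage
mount | grep /dev/md0    # the filesystem should now be listed
```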

Xedon420 commented Jul 23, 2018

The filesystems are mounted... Samba and Plex can use them. To me it seems to be only a display bug.

votdev commented Jul 23, 2018

What is the output of blkid -o full?

Xedon420 commented Jul 23, 2018

root@NAS:~# blkid -o full
/dev/sda: UUID="2174d5ce-5e03-4234-f6c4-80edd024cf9a" UUID_SUB="3915b4a3-ba3e-8692-c622-c18ff7491675" LABEL="NAS:Storage" TYPE="linux_raid_member"
/dev/sdb: UUID="2174d5ce-5e03-4234-f6c4-80edd024cf9a" UUID_SUB="9f477511-f301-8b02-df97-579e25528d56" LABEL="NAS:Storage" TYPE="linux_raid_member"
/dev/sde1: UUID="3ed73ad8-279b-43da-b7cd-bdb26c89244a" TYPE="ext4" PARTUUID="b1fc64cd-01"
/dev/sde5: UUID="c07f6696-a3ac-4188-ac9e-4bc0f113b1e1" TYPE="swap" PARTUUID="b1fc64cd-05"
votdev commented Jul 23, 2018

There is the problem: you have two file systems with the same label. Remove one of the labels to fix this issue.
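For what it's worth, changing a label only rewrites metadata and does not touch the data itself. A sketch with example device names (not taken from this system): an ext4 label is changed with e2label, while the name stored in the MD superblock is changed by reassembling the array with --update=name:

```shell
# Example device names -- adjust to the real system.
# ext4 filesystem label (metadata only; the data is untouched):
e2label /dev/md0              # print the current label
e2label /dev/md0 Storage2     # set a new label

# Name stored in the MD superblock: stop the array and reassemble
# it with an updated name (again metadata only):
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 --name=NAS:Storage2 --update=name /dev/sda /dev/sdb
```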

@votdev votdev closed this Jul 23, 2018

Xedon420 commented Jul 23, 2018

I haven't added another label, and I can't accept any data loss... I can't explain it otherwise (I can't delete from a file system at the moment). Is there another solution besides the web panel?

Xedon420 commented Jul 23, 2018

And what if this can be done without losing data? How?

votdev commented Jul 24, 2018

I tried to reproduce this issue with OMV4, but wasn't able to see your problem. Everything works. Which OMV version are you using?

Please also post the output of mount.

@votdev votdev reopened this Jul 24, 2018

Xedon420 commented Jul 24, 2018

I don't understand...
All services can use the hard disk as always; only the OMV web panel suddenly has an error after an update. I mean, if a second label is responsible for this, that second label must have been created somehow.

OMV version 4.1.8.2-1 (Arrakis) is installed.

Here is the output of the mount command...

root@NAS:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=1977220k,nr_inodes=494305,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=405060k,mode=755)
/dev/sde1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=40,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=10349)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
/dev/md0 on /srv/dev-disk-by-id-md-name-NAS-Storage type ext4 (rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
tmpfs on /run/user/997 type tmpfs (rw,nosuid,nodev,relatime,size=405056k,mode=700,uid=997,gid=996)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tmpfs on /run/user/1005 type tmpfs (rw,nosuid,nodev,relatime,size=405056k,mode=700,uid=1005,gid=1001)

@votdev votdev added the 4.x label Jul 24, 2018

votdev commented Jul 24, 2018

What is the output of omv-confdbadm read --prettify "conf.system.filesystem.mountpoint"?

Xedon420 commented Jul 24, 2018

root@NAS:~# omv-confdbadm read --prettify "conf.system.filesystem.mountpoint"
[
    {
        "dir": "/srv/dev-disk-by-id-md-name-NAS-Storage",
        "freq": 0,
        "fsname": "/dev/disk/by-id/md-name-NAS:Storage",
        "hidden": false,
        "opts": "defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl",
        "passno": 2,
        "type": "ext4",
        "uuid": "c3aa31c9-53b6-4da7-915b-4cab8550a63d"
    }
]
votdev commented Jul 24, 2018

Still no idea what is going wrong. What is the output of udevadm info --query=property --name=/dev/md0 and readlink -f /dev/disk/by-id/md-name-NAS:Storage?

votdev commented Jul 24, 2018

I think I've found the problem, but I don't know the reason for it. On your system there is no blkid entry for the /dev/md0 device, unlike on my system:

root@omv4box:/home/vagrant# blkid
/dev/vda1: UUID="199a4bbc-59c9-4a3b-b592-950eaffb2530" TYPE="ext4" PARTUUID="1aac37af-01"
/dev/sdb: UUID="f376f1dd-2fb1-e1a7-5b66-eac09348e857" UUID_SUB="93522a42-7d44-3de6-a46d-8388acfc094c" LABEL="omv4box:Storage" TYPE="linux_raid_member"
/dev/md127: UUID="00015783-6d7e-41b3-a58d-f3b5292f3c1b" TYPE="ext4"
/dev/sdc: UUID="f376f1dd-2fb1-e1a7-5b66-eac09348e857" UUID_SUB="f7136555-6410-4616-490e-653e1bb114f7" LABEL="omv4box:Storage" TYPE="linux_raid_member"
/dev/sda1: PARTUUID="5d35e9e5-a1aa-4714-8e26-c09cdefc59b1"

Did you create this MD RAID on OMV4? I think there was a problem with RAID devices that were created with an older Debian system. In that case the only solution was to set up an OMV(2|3) system, back up the data, install OMV4, recreate the RAID, and copy the backup data back. I think the problem was with the RAID metadata, but I'm no expert in this area.
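When blkid is missing an entry for a device, it can help to bypass its cache and probe the superblock directly, and to have udev re-read the device (a sketch; /dev/md0 as in the outputs above):

```shell
# Low-level probe of the device, bypassing the blkid cache:
blkid -p /dev/md0

# Ask udev to re-run its rules for the device and wait until done:
udevadm trigger --name-match=/dev/md0
udevadm settle
```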

Xedon420 commented Jul 24, 2018

hmm ok...
An idea how to fix this?

votdev commented Jul 24, 2018

I'm sorry, no. This problem is far outside the scope of OMV. It seems to be a problem somewhere in the kernel <-> md <-> filesystem stack.

votdev commented Jul 24, 2018

You did not answer how this MD RAID was created. A solution is to reinstall the Debian/OMV version where the filesystem on your MD device is accessible, to create a backup of your data. Then set up a complete OMV4 system from scratch to work around this Debian <-> kernel <-> md module <-> filesystem issue.

Xedon420 commented Jul 24, 2018

hmm ok...

I cannot perform a new installation at the moment (I have no free backup disks).
Is there a workaround so I can set the users without the panel, etc.?

But it's a bit strange because it really worked until the last update without problems, and now suddenly it doesn't anymore... By the way, I use a pure OMV image here.

votdev commented Jul 24, 2018

But it's a bit strange because it really worked until the last update without problems

From which version did you update? Please keep in mind that most packages coming via updates are from the Debian project (only a small number of packages are delivered by the OMV project, mostly those called openmediavault-xxxx.deb). Maybe you've installed a new kernel package which introduced the problem. In my opinion this is 99% the reason, more exactly something in the md submodule or the ext filesystem module.

Xedon420 commented Jul 25, 2018

OK, so that means that as soon as a new kernel update comes out, the error could resolve itself again?

votdev commented Jul 26, 2018

This could be, but I wouldn't bet on it.

You could also try to install an older kernel. Set OMV_APT_USE_KERNEL_BACKPORTS=no in /etc/default/openmediavault and then run:

# omv-mkconf apt
# apt-get update

And now comes the tricky part: forcing the installation of the kernel package originally shipped with Debian 9. Maybe someone can help out here, or you will have to ask in the forum, because I can't help with this since I have never done it myself.
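Forcing the stock Debian 9 (stretch) kernel back in could look roughly like this (a sketch; the exact package names must be checked with apt-cache on the actual system):

```shell
# List the kernel image packages apt knows about:
apt-cache search linux-image | grep '^linux-image'

# Install the metapackage explicitly from the stretch suite;
# the /stretch suffix selects the non-backports version:
apt-get install linux-image-amd64/stretch

# After a reboot, the older kernel can be selected in the GRUB menu.
```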

brando56894 commented Dec 26, 2018

I have the same issue. I created my RAID10 array on Arch Linux and imported it into OMV; it did show up, but during the process of updating and installing things it disappeared.

Here's all the info you requested above: https://hastebin.com/tesaviniku.makefile

 [bran@nas ~]$ uname -a
Linux nas 4.18.0-0.bpo.3-amd64 #1 SMP Debian 4.18.20-2~bpo9+1 (2018-12-08) x86_64 GNU/Linux

OMV 4.1.16-2

[bran@nas ~]$ sudo blkid -o full
/dev/sda1: UUID="2b9a7fd3-9af3-413d-903d-9dd38a03171a" TYPE="ext4" PARTUUID="ec3d4e1e-01"
/dev/sda5: UUID="564b42a2-42ab-48c4-a255-377a32d047d1" TYPE="swap" PARTUUID="ec3d4e1e-05"
/dev/sdb1: UUID="f36f4c88-0b5c-8826-f897-d5d20d6aa130" UUID_SUB="e57c1d62-8ef6-1ce6-5b7d-015137ebe51a" LABEL="archiso:0" TYPE="linux_raid_member" PARTUUID="c47e814a-1bd9-374a-a73c-83b5305edff3"
/dev/sdd1: UUID="f36f4c88-0b5c-8826-f897-d5d20d6aa130" UUID_SUB="44cc679c-efc9-de13-862c-ea2abdcffc34" LABEL="archiso:0" TYPE="linux_raid_member" PARTUUID="d4f9507b-41b0-554d-a5cc-34bf8a949d7d"
/dev/sdc1: UUID="f36f4c88-0b5c-8826-f897-d5d20d6aa130" UUID_SUB="6f27aa87-3fc1-508f-b07c-024fc56e2cac" LABEL="archiso:0" TYPE="linux_raid_member" PARTUUID="309a1a6a-a32c-5949-9369-f0fae24d7501"
/dev/sde1: UUID="f36f4c88-0b5c-8826-f897-d5d20d6aa130" UUID_SUB="068497e7-4512-cb2e-8334-534a170fc14a" LABEL="archiso:0" TYPE="linux_raid_member" PARTUUID="78cfdebd-9653-d845-81be-860929ef78b0"
/dev/md0: UUID="30977e3c-cf88-4228-bb5f-6f29359172fe" TYPE="ext4"

 [bran@nas ~]$ omv-confdbadm read --prettify "conf.system.filesystem.mountpoint"
[
    {
        "dir": "/srv/dev-disk-by-id-md-name-archiso-0",
        "freq": 0,
        "fsname": "/dev/disk/by-id/md-name-archiso:0",
        "hidden": false,
        "opts": "defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl",
        "passno": 2,
        "type": "ext4",
        "uuid": "b8931e2c-e76d-48a7-83ec-88eab0faebd4"
    }
]


 [bran@nas ~]$ udevadm info --query=property --name=/dev/md0
DEVLINKS=/dev/disk/by-id/md-uuid-f36f4c88:0b5c8826:f897d5d2:0d6aa130 /dev/disk/by-uuid/30977e3c-cf88-4228-bb5f-6f29359172fe /dev/disk/by-id/md-name-archiso:0
DEVNAME=/dev/md0
DEVPATH=/devices/virtual/block/md0
DEVTYPE=disk
ID_FS_TYPE=ext4
ID_FS_USAGE=filesystem
ID_FS_UUID=30977e3c-cf88-4228-bb5f-6f29359172fe
ID_FS_UUID_ENC=30977e3c-cf88-4228-bb5f-6f29359172fe
ID_FS_VERSION=1.0
MAJOR=9
MD_DEVICES=4
MD_DEVICE_sdb1_DEV=/dev/sdb1
MD_DEVICE_sdb1_ROLE=0
MD_DEVICE_sdc1_DEV=/dev/sdc1
MD_DEVICE_sdc1_ROLE=3
MD_DEVICE_sdd1_DEV=/dev/sdd1
MD_DEVICE_sdd1_ROLE=1
MD_DEVICE_sde1_DEV=/dev/sde1
MD_DEVICE_sde1_ROLE=2
MD_LEVEL=raid10
MD_METADATA=1.2
MD_NAME=archiso:0
MD_UUID=f36f4c88:0b5c8826:f897d5d2:0d6aa130
MINOR=0
SUBSYSTEM=block
SYSTEMD_WANTS=mdmonitor.service
TAGS=:systemd:
USEC_INITIALIZED=4325016


 [bran@nas ~]$ readlink -f /dev/disk/by-id/md-name-archiso\:0 
/dev/md0
votdev commented Dec 26, 2018

@brando56894 Thanks for the report. I think this is a bug in the kernel or userland, maybe something related to the MD metadata (I'm no expert in this area). Sadly I have to say this is out of the scope of the OMV project.

brando56894 commented Dec 27, 2018

Interestingly enough... it's back now! I noticed while doing the above that my array wasn't mounted at /mnt/storage like it usually is, so I changed the mount point, but that screwed up a lot of OMV-related things, since OMV was expecting it to be mounted under /srv. So I mounted the array at both places and rebooted... and now it exists, about 4 hours after the reboot, but it wasn't there initially after the reboot. Very strange.

I'm going to reinstall OMV since I bought an NVMe drive, and if I have time I'll recreate the array in OMV this time. Thanks.
