Mirror RAID often is not available #63

Open
m821926 opened this Issue Mar 12, 2018 · 15 comments

m821926 commented Mar 12, 2018

Hello,

My mirror RAID is often not available, and to correct it I need to run this command from an SSH command line:
mdadm --assemble --scan

Result:
root@raspberrypi:~# mdadm --assemble --scan
mdadm: /dev/md0 has been started with 2 drives.

My RAID volume is then back online, but this should not be the normal behaviour.
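
For reference, whether the array was assembled at boot can be checked before running the assemble command with

cat /proc/mdstat

(an inactive or missing md0 entry there means the array did not come up on its own).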

Contributor

subzero79 commented Mar 14, 2018

You should not be running a RAID on an RPi, or on any other system over USB. This is not an OMV issue.

SimonMcN commented Mar 21, 2018

https://superuser.com/questions/287462/how-can-i-make-mdadm-auto-assemble-raid-after-each-boot

I had this problem on my Raspberry Pi 2 running Raspbian GNU/Linux 8 (jessie). I had a RAID array on /dev/sda1 and /dev/sdb1 which failed to assemble at boot. I had in my /etc/mdadm/mdadm.conf file the entry

ARRAY /dev/md/0 metadata=1.2 UUID=53454954:4044eb66:9169d1ed:40905643 name=raspberrypi:0

(your numbers will be different; see other answers on how to get this.)
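
(For reference, an ARRAY line in this format can usually be generated from a running array with

mdadm --detail --scan

and appended to /etc/mdadm/mdadm.conf.)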

I had in my /etc/fstab file the entry

/dev/md0 /data ext4 defaults 0 0

(and of course /data indeed existed)

Like the OP, I could assemble and mount the RAID array by hand after boot, but I could not get it to happen automatically during boot despite apparently correctly setting it up.

I was able to solve the problem as follows. I investigated the script at /etc/init.d/mdadm-raid and inserted a line of debug code

ls /dev > /home/pi/devices.txt

Rebooting and checking this file I learned that devices /dev/sda and /dev/sdb existed at the time the mdadm-raid initialization happened, but the partitions /dev/sda1 and /dev/sdb1 were missing. I edited the /etc/init.d/mdadm-raid file and inserted the line

partprobe

after the header (i.e. after the ### END INIT INFO but before the script begins). This caused the partitions to be detected and so the mdadm-raid script was able to assemble the RAID array, resolving the problem. Hope this helps someone!
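
A minimal sketch of the edit, assuming the standard Debian init header and that partprobe (shipped with the parted package) is installed; only the partprobe line is new:

### END INIT INFO

# Re-read the partition tables so /dev/sda1 and /dev/sdb1 exist
# before the script tries to assemble the array.
partprobe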

Contributor

ryecoaaron commented Mar 21, 2018

OMV doesn't use partitions for the RAID arrays it creates, so this solution probably wouldn't fix most people's RAID issues.

Collaborator

votdev commented Mar 21, 2018

This is definitely not a problem in the OMV code.

meequz commented Mar 25, 2018

You should not be running a RAID on an RPi, or on any other system over USB

@subzero79 where can I read more about that?
RAID is an important feature for a NAS, and since OMV provides images for the RPi, I would expect it to fully support RAID.

Contributor

subzero79 commented Mar 25, 2018

It supports RAID because the OMV package is architecture-agnostic; it doesn't remove UI elements based on the platform.

You should consider RAID on an RPi only as a proof of concept or for playing around. The most common reality is what you describe in your issue: reboot, and the RAID is gone.

You can read all over the forum about people having problems with the RPi. Yes, there are images for the platform, but only because users demand them; if it were up to me, I would remove them.

At the moment there are around 4-5 different affordable ARM boards that beat the RPi by 10x as a NAS server, and some of them even have SATA ports now. I personally own a Rock64 board; it performs very well, but I would never consider running a RAID on it.

The RPi is a good media player (I'll give you that, at least for x264 media), but a terrible choice for what you really want.

meequz commented Mar 26, 2018

@subzero79 but why is that? Why are there RAID problems with USB-attached systems?

Contributor

ryecoaaron commented Mar 26, 2018

RAID needs full control over the drives, and USB doesn't give that level of control. Plus, USB is not as reliable, but it was never intended to be.

Contributor

subzero79 commented Mar 27, 2018

@m821926 follow the advice given by @SimonMcN if it works for you. He is modifying a default Debian init script; OMV won't do that for you.

There is not much we can do here. OMV provides a panel for creating MD RAIDs, removing disks from them, etc. It also configures /etc/mdadm/mdadm.conf and notifications, and does not go beyond that; the expectation is that the system boots with the RAID assembled and ready to be mounted via fstab. There is no magic here.

It could be that the drives are not ready (due to a slow USB bridge) when mdadm is scanning, or it could be the fact that you created the RAID using partitions.

The two reasons to have RAID are availability and performance. You clearly don't have availability (as you mention, it rarely works on reboot), and we should not discuss performance in this case. Also a reminder: this RAID is not your backup.

You might want to rethink your setup: consider using the two disks as separate filesystems and mirroring them with a daily rsync job, as sketched below.
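
A minimal sketch of such a job, assuming the two filesystems are mounted at the hypothetical paths /srv/disk1 and /srv/disk2 (OMV's real mount points will differ):

# /etc/cron.d/mirror-disks (hypothetical file name)
# Mirror disk1 to disk2 every night at 02:00; --delete keeps the copy exact.
0 2 * * * root rsync -a --delete /srv/disk1/ /srv/disk2/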

SimonMcN commented Mar 27, 2018

Would it be worth adding
mdadm --assemble --scan
to init.rc? Would that give the USB devices time to start?
Alternatively, you could perhaps add a root cron job (sketched below)?
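
A minimal sketch of the cron variant, assuming a fixed delay is enough for the USB drives to appear (the 30-second value is only illustrative):

# root crontab entry (crontab -e as root)
@reboot sleep 30 && mdadm --assemble --scan && mount -a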

SimonMcN commented Mar 27, 2018

As an aside, I agree with the other guys that USB is not a great performance interface to run mdadm over; however, I'm not sure I'd agree that USB is not fit for purpose in a RAID environment. USB adds an additional level of complexity, as there is a USB-to-disk interface on both sides of the link. So the throughput will be lower, the links will take longer to come up and down, and the latency will be higher. This, in conjunction with the lower processing power of the Pi, makes it a poor usage scenario; however, it should work, albeit with everything running a LOT slower, including the initial initialisation and detection of the drives.
This is something the RPi OMV build could cater for, but it would definitely be above and beyond the call of duty, and it would be a favour by the build creator rather than a bug fix. IMO.

SimonMcN commented Mar 27, 2018

As a minimum, you should also supply a copy of the dmesg output and mdadm --detail taken prior to the manual assembly, to aid troubleshooting.
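
For example, assuming the array is /dev/md0 as in the original report:

dmesg > dmesg.txt
cat /proc/mdstat > mdstat.txt
mdadm --detail /dev/md0 > md0-detail.txt 2>&1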

Contributor

ryecoaaron commented Mar 27, 2018

Debian/OMV already tries to assemble mdadm RAID arrays at boot. Waiting for USB devices is the job of the BIOS or something else. rc and cron are the wrong idea, since services depend on storage that would not yet be ready.

After all the bad results I have seen with RAID, USB, and the RPi, I would never go down that road. If you really want to use an RPi and want redundancy, connecting a RAID to the RPi is the wrong way to go in my opinion. You should have multiple RPis with one drive attached to each, then rsync between them. This is the only way you will ever get halfway reliable redundancy from RPis (and it is still not very good).

SimonMcN commented Mar 27, 2018

"Having the Raspberry Pi 3, adding the rootdelay=5 to the /boot/cmdline.txt solved this problem”

The credit goes here:

https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=153578
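
For reference, /boot/cmdline.txt is a single line, and the parameter is simply appended to it; the existing contents shown here are only illustrative of a typical Raspbian install:

console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait rootdelay=5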

Contributor

ryecoaaron commented Mar 27, 2018

Search the OMV forum... You will see this doesn't help everyone. If I remember correctly, people even tried 20 seconds without luck.
