Anyone had much experience with mdadm raid1 arrays?
My OpenMediaVault NAS has (had) a RAID 1 array set up with mdadm.
Everything was fine.
After a power blip (due to our Autumnal weather?), the RAID array has become 2 separate RAID 1 arrays, each in degraded mode because the “other” disk is missing.
Wow.
So, because both drives are “physically” fine (according to SMART), I’m not quite sure where to start…
For whatever reason, the RAID array was /dev/md127… but now I have both /dev/md127 and /dev/md126. I’m presuming that /dev/md127 is the “real” one and that I should remove the drive associated with /dev/md126 and add it back into /dev/md127?
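For reference, here’s roughly how I’m planning to work out which array is which before touching anything – the member device names (/dev/sda1, /dev/sdb1) are just placeholders until I’ve checked what’s actually in each array:

```shell
# See how the kernel has currently assembled things
cat /proc/mdstat

# Details of each half of the split array
mdadm --detail /dev/md127
mdadm --detail /dev/md126

# Compare the on-disk superblocks: the member with the higher
# "Events" count / newer "Update Time" should be the up-to-date one.
# (/dev/sda1 and /dev/sdb1 are placeholders for the real members)
mdadm --examine /dev/sda1 | grep -E 'Events|Update Time|Array UUID'
mdadm --examine /dev/sdb1 | grep -E 'Events|Update Time|Array UUID'
```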
Because both are degraded, they are both mounted read-only.
I’ve mounted md126 at a temporary mount point and have a “diff -qr” running to check that the contents of both drives match, but I’m presuming I’ll have to treat one drive as “foreign” and that it’ll be rewritten by mdadm when it’s joined back to the real array?
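If that’s right, the rough sequence I’ve pieced together is below – for comment/correction! It assumes md127 holds the good data and that /dev/sdb1 is the member currently sitting in md126, both of which are guesses until I’ve compared the event counts:

```shell
# Stop the "wrong" degraded array so its member device becomes free
mdadm --stop /dev/md126

# Wipe the stale RAID superblock on that member so it's treated as a
# fresh disk – this destroys its (out-of-date) copy of the data!
mdadm --zero-superblock /dev/sdb1

# Add it back into the surviving array; mdadm resyncs it from md127
mdadm /dev/md127 --add /dev/sdb1

# Watch the rebuild progress
watch -n 5 cat /proc/mdstat

# Once clean, record the layout so it assembles the same way on boot
# (check mdadm.conf for duplicate/stale ARRAY lines afterwards)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```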
OpenMediaVault doesn’t really have much capability for repairing arrays – the web GUI is pretty much intended for creating new arrays out of new blank drives – so I’m planning to just dive into mdadm on the command line and recover the array that way.
Any pointers?
And... just to make this more interesting… I’ll be doing this remotely, from out of the country! :o)
FYI – the NAS has other physical drives in it (just some old non-RAID drives with less capacity), so the critical data will be copied to those first – but I’d like to use this opportunity to see how well I can recover from this situation, in case it happened at a remote customer’s site, etc…
Google throws up some useful pointers, but I trust this forum’s advice better than some random blogs written in 1834 referring to files / applications which don’t exist on this Debian Squeeze based system.
And yes… a UPS will be ordered up ASAP…
Thanks!
Steve