I'm having a problem with a RAID array on a server running Ubuntu 10.04.
I have a 4-disk RAID 5 array, sd[cdef], created like this:
# partition disks (the same three parted commands were run for sdd, sde and sdf)
parted /dev/sdc mklabel gpt
parted /dev/sdc mkpart primary ext2 1 2000GB
parted /dev/sdc set 1 raid on

# create array
mdadm --create -v --level=raid5 --raid-devices=4 /dev/md2 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
It had been running fine for months.
I just applied system updates and rebooted, and the RAID 5 array (/dev/md2) didn't come back up. When I reassembled it with mdadm --assemble --scan, it came up with only 3 member drives; sdf1 was missing. Here is what I could find:
(Note: md0 and md1 are RAID 1 arrays across two other drives, holding / and swap respectively.)
root@dwight:~# mdadm --query --detail /dev/md2
/dev/md2:
        Version : 00.90
  Creation Time : Sun Feb 20 23:52:28 2011
     Raid Level : raid5
     Array Size : 5860540224 (5589.05 GiB 6001.19 GB)
  Used Dev Size : 1953513408 (1863.02 GiB 2000.40 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Fri Apr  8 22:10:38 2011
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 1bb282b6:fe549071:3bf6c10c:6278edbc (local to host dwight)
         Events : 0.140

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        2      active sync   /dev/sde1
       3       0        0        3      removed
(Yes, the server is named dwight; I'm a fan of The Office. :))
So it thinks one drive (well, partition) is missing: /dev/sdf1.
root@dwight:~# mdadm --detail --scan
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=c7dbadaa:7762dbf7:beb6b904:6d3aed07
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=00.90 UUID=1784e912:d84242db:3bf6c10c:6278edbc
mdadm: md device /dev/md/d2 does not appear to be active.
ARRAY /dev/md2 level=raid5 num-devices=4 metadata=00.90 UUID=1bb282b6:fe549071:3bf6c10c:6278edbc
Wait, what? /dev/md/d2? What is /dev/md/d2? I didn't create that.
root@dwight:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid5 sdc1[0] sde1[2] sdd1[1]
      5860540224 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]

md_d2 : inactive sdf1[3](S)
      1953513408 blocks

md1 : active raid1 sdb2[1] sda2[0]
      18657728 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      469725120 blocks [2/2] [UU]

unused devices: <none>
Same thing there: md_d2? The sd[cde]1 partitions are correctly in md2, but sdf1 is missing (and it seems to think sdf1 should be its own array?).
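For anyone skimming a similar problem, the tell-tale is easy to grep out of /proc/mdstat. A minimal sketch, fed here from a here-doc copy of the output above so it can be tried anywhere; on the server itself you would grep /proc/mdstat directly:

```shell
# Two lines copied from the /proc/mdstat output above; on the real
# machine, replace the here-doc with:  grep sdf1 /proc/mdstat
mdstat=$(cat <<'EOF'
md2 : active raid5 sdc1[0] sde1[2] sdd1[1]
md_d2 : inactive sdf1[3](S)
EOF
)

# The (S) suffix marks sdf1 as a *spare*, parked in the inactive
# md_d2 array instead of serving in md2 where it belongs.
echo "$mdstat" | grep sdf1
```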
root@dwight:~# mdadm -v --examine /dev/sdf1
/dev/sdf1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 1bb282b6:fe549071:3bf6c10c:6278edbc (local to host dwight)
  Creation Time : Sun Feb 20 23:52:28 2011
     Raid Level : raid5
  Used Dev Size : 1953513408 (1863.02 GiB 2000.40 GB)
     Array Size : 5860540224 (5589.05 GiB 6001.19 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Fri Apr  8 21:40:42 2011
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 71136469 - correct
         Events : 114

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       81        3      active sync   /dev/sdf1

   0     0       8       33        0      active sync   /dev/sdc1
   1     1       8       49        1      active sync   /dev/sdd1
   2     2       8       65        2      active sync   /dev/sde1
   3     3       8       81        3      active sync   /dev/sdf1
…so sdf1 thinks it's part of the md2 array, right?
When I run the same against /dev/sdc1, I get:
root@dwight:~# mdadm -v --examine /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 1bb282b6:fe549071:3bf6c10c:6278edbc (local to host dwight)
  Creation Time : Sun Feb 20 23:52:28 2011
     Raid Level : raid5
  Used Dev Size : 1953513408 (1863.02 GiB 2000.40 GB)
     Array Size : 5860540224 (5589.05 GiB 6001.19 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 2

    Update Time : Fri Apr  8 22:50:03 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 71137458 - correct
         Events : 144

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       33        0      active sync   /dev/sdc1

   0     0       8       33        0      active sync   /dev/sdc1
   1     1       8       49        1      active sync   /dev/sdd1
   2     2       8       65        2      active sync   /dev/sde1
   3     3       0        0        3      faulty removed
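Worth noting when comparing the two --examine dumps: sdf1's superblock stopped at event counter 114 (Update Time 21:40), while sdc1 kept going to 144 (22:50), so md considers sdf1 stale and won't assemble it into the active array. A sketch of that check, with the counters hard-coded from the outputs above (on a live system you would pull them with something like mdadm --examine /dev/sdX1 | awk '/Events/ {print $3}'):

```shell
# Events counters copied verbatim from the two --examine outputs above.
sdf1_events=114
sdc1_events=144

# A member whose counter lags the others has missed writes; mdadm
# will resync it against the surviving members when it is re-added.
if [ "$sdf1_events" -lt "$sdc1_events" ]; then
    echo "sdf1 is stale ($sdf1_events < $sdc1_events); it will resync when re-added"
fi
```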
When I try to add sdf1 to the /dev/md2 array, I get a "busy" error:
root@dwight:~# mdadm --add /dev/md2 /dev/sdf1
mdadm: Cannot open /dev/sdf1: Device or resource busy
Help! How do I get sdf1 back into the md2 array?
Thanks,
Stop the phantom array with mdadm -S /dev/md_d2, then try adding sdf1 again.
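Expanding on that a little: the "Device or resource busy" error happens because the inactive md_d2 array is holding sdf1, so it has to be stopped before the partition can be given back to md2. A hedged sketch of the full sequence, using only the device names from the question; the run() wrapper and DRY_RUN switch are my own additions so the plan can be printed and reviewed before anything is actually executed (run with DRY_RUN=0 as root to apply it):

```shell
#!/bin/sh
# Print each command instead of running it unless DRY_RUN=0 is set.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Stop the phantom array that is holding /dev/sdf1 busy.
run mdadm --stop /dev/md_d2

# 2. Re-add the freed partition to the real array; mdadm will resync
#    it against the three surviving members.
run mdadm --add /dev/md2 /dev/sdf1

# 3. Watch the rebuild progress.
run cat /proc/mdstat
```

Once the array is healthy again, it may also be worth making sure /etc/mdadm/mdadm.conf (the path on Ubuntu) lists the arrays as assembled, and rerunning update-initramfs -u, so the stray md_d2 does not reappear on the next boot.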