My RAID array has failed, and I'm not sure what the best course of action is to try to recover it.

I have 4 drives in a RAID 5 configuration. It looks like one of them failed (sde1), but md can't assemble the array because it says sdd1 is not fresh.

Is there anything I can do to recover the array?

I've included some excerpts from /var/log/messages and mdadm --examine below:
/var/log/messages
$ egrep -w sd[b,c,d,e]\|raid\|md /var/log/messages
nas kernel: [...] sd 5:0:0:0: [sde]
nas kernel: [...] sd 5:0:0:0: [sde] CDB:
nas kernel: [...] end_request: I/O error, dev sde, sector 937821218
nas kernel: [...] sd 5:0:0:0: [sde] killing request
nas kernel: [...] md/raid:md0: read error not correctable (sector 937821184 on sde1).
nas kernel: [...] md/raid:md0: Disk failure on sde1, disabling device.
nas kernel: [...] md/raid:md0: Operation continuing on 2 devices.
nas kernel: [...] md/raid:md0: read error not correctable (sector 937821256 on sde1).
nas kernel: [...] sd 5:0:0:0: [sde] Unhandled error code
nas kernel: [...] sd 5:0:0:0: [sde]
nas kernel: [...] sd 5:0:0:0: [sde] CDB:
nas kernel: [...] end_request: I/O error, dev sde, sector 937820194
nas kernel: [...] sd 5:0:0:0: [sde] Synchronizing SCSI cache
nas kernel: [...] sd 5:0:0:0: [sde]
nas kernel: [...] sd 5:0:0:0: [sde] Stopping disk
nas kernel: [...] sd 5:0:0:0: [sde] START_STOP FAILED
nas kernel: [...] sd 5:0:0:0: [sde]
nas kernel: [...] md: unbind<sde1>
nas kernel: [...] md: export_rdev(sde1)
nas kernel: [...] md: bind<sdd1>
nas kernel: [...] md: bind<sdc1>
nas kernel: [...] md: bind<sdb1>
nas kernel: [...] md: bind<sde1>
nas kernel: [...] md: kicking non-fresh sde1 from array!
nas kernel: [...] md: unbind<sde1>
nas kernel: [...] md: export_rdev(sde1)
nas kernel: [...] md: kicking non-fresh sdd1 from array!
nas kernel: [...] md: unbind<sdd1>
nas kernel: [...] md: export_rdev(sdd1)
nas kernel: [...] md: raid6 personality registered for level 6
nas kernel: [...] md: raid5 personality registered for level 5
nas kernel: [...] md: raid4 personality registered for level 4
nas kernel: [...] md/raid:md0: device sdb1 operational as raid disk 2
nas kernel: [...] md/raid:md0: device sdc1 operational as raid disk 0
nas kernel: [...] md/raid:md0: allocated 4338kB
nas kernel: [...] md/raid:md0: not enough operational devices (2/4 failed)
nas kernel: [...] md/raid:md0: failed to run raid set.
nas kernel: [...] md: pers->run() failed ...
mdadm --examine
$ mdadm --examine /dev/sd[bcdefghijklmn]1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 4dc53f9d:f0c55279:a9cb9592:a59607c9
           Name : NAS:0
  Creation Time : Sun Sep 11 02:37:59 2011
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 3907027053 (1863.02 GiB 2000.40 GB)
     Array Size : 5860538880 (5589.05 GiB 6001.19 GB)
  Used Dev Size : 3907025920 (1863.02 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : e8369dbc:bf591efa:f0ccc359:9d164ec8
    Update Time : Tue May 27 18:54:37 2014
       Checksum : a17a88c0 - correct
         Events : 1026050
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 2
    Array State : A.A. ('A' == active, '.' == missing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 4dc53f9d:f0c55279:a9cb9592:a59607c9
           Name : NAS:0
  Creation Time : Sun Sep 11 02:37:59 2011
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 3907027053 (1863.02 GiB 2000.40 GB)
     Array Size : 5860538880 (5589.05 GiB 6001.19 GB)
  Used Dev Size : 3907025920 (1863.02 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 78221e11:02acc1c8:c4eb01bf:f0852cbe
    Update Time : Tue May 27 18:54:37 2014
       Checksum : 1fbb54b8 - correct
         Events : 1026050
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 0
    Array State : A.A. ('A' == active, '.' == missing)
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 4dc53f9d:f0c55279:a9cb9592:a59607c9
           Name : NAS:0
  Creation Time : Sun Sep 11 02:37:59 2011
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 3907027053 (1863.02 GiB 2000.40 GB)
     Array Size : 5860538880 (5589.05 GiB 6001.19 GB)
  Used Dev Size : 3907025920 (1863.02 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : fd282483:d2647838:f6b9897e:c216616c
    Update Time : Mon Oct 7 19:21:22 2013
       Checksum : 6df566b8 - correct
         Events : 32621
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 3
    Array State : AAAA ('A' == active, '.' == missing)
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 4dc53f9d:f0c55279:a9cb9592:a59607c9
           Name : NAS:0
  Creation Time : Sun Sep 11 02:37:59 2011
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 3907027053 (1863.02 GiB 2000.40 GB)
     Array Size : 5860538880 (5589.05 GiB 6001.19 GB)
  Used Dev Size : 3907025920 (1863.02 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : e84657dd:0882a7c8:5918b191:2fc3da02
    Update Time : Tue May 27 18:46:12 2014
       Checksum : 33ab6fe - correct
         Events : 1026039
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 1
    Array State : AAA. ('A' == active, '.' == missing)
You have a double drive failure, and one of the drives has been dead for six months (compare sdd1's Update Time and Events count with the others). With RAID 5, that is unrecoverable. Replace the failed hardware and restore from backup.
Going forward, consider RAID 6 for drives this large, and make sure you have monitoring in place to catch device failures so you can respond promptly.
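As a rough sketch of what that monitoring might look like with mdadm's own monitor mode (the e-mail address is a placeholder, and this assumes your system can deliver local mail):

```shell
# In /etc/mdadm.conf (or /etc/mdadm/mdadm.conf on Debian/Ubuntu):
# mail an alert whenever a device fails or an array degrades.
# MAILADDR admin@example.com   <- placeholder address

# Run the monitor as a daemon, scanning all arrays in the config
# (many distributions start this for you at boot):
mdadm --monitor --scan --daemonise --delay=1800

# One-shot test that alert mail is actually delivered:
mdadm --monitor --scan --oneshot --test
```

With that in place, the sdd1 failure would have generated an alert back in October 2013 instead of going unnoticed until the second drive died.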
That said, if your backup isn't current, you can try forcing the array to assemble in degraded mode with the three remaining drives:
mdadm -v --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sde1
And since sde1's update time and event count are only slightly out of sync with the others, I suspect you'll be able to access most of your data. I've done this successfully several times in similar RAID 5 failure situations.
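If the forced assembly succeeds, a cautious follow-up might look like this (a sketch; the device names match the question, and sdX1 is a placeholder for a replacement partition):

```shell
# Confirm the array came up, degraded but running:
cat /proc/mdstat
mdadm --detail /dev/md0

# Mount read-only first and copy the important data off
# before doing anything that writes to the array:
mount -o ro /dev/md0 /mnt

# Only once the data is safe, add a replacement disk and
# let the array rebuild (sdX1 is a placeholder):
mdadm /dev/md0 --add /dev/sdX1
```

Given that sde1 was already throwing uncorrectable read errors, expect the rebuild itself to be risky; getting the data off first is the priority.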