Kernel: 2.6.38-8-server. Five SATA drives (4 Samsung, 1 Western Digital), 500 GB each, on an LSI SAS 9201-16i host bus adapter, managed with mdadm software RAID. The other two arrays on the machine (/dev/md1, /dev/md2) are fine. So: my RAID is toast. At this point I am well out of my depth, so I am hoping someone here can point me in a good direction. As I mention below, I have been at this for about sixteen hours (with a break to clear my head!), and I have been reading everything I can find here and elsewhere. Most of the advice is the same, and not encouraging, but I am hoping to attract some more knowledgeable eyes.
So... yesterday I tried to add an extra drive to my RAID 5 array. To do that, I shut down the box, plugged in the new drive, and powered the machine back up. So far so good.
Then I unmounted the array
% sudo umount /dev/md0
and ran a filesystem check.
% sudo e2fsck -f /dev/md0
Everything came back clean.
I created a single primary partition on the new drive, /dev/sdh1, and set its type to Linux raid autodetect. I wrote the changes to disk and quit.
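(As an aside, the same partitioning can be captured as an sfdisk input line — a sketch under the assumption of a single whole-disk partition, not the exact session I ran:

```
,,fd
```

fed to the tool on stdin as `sudo sfdisk /dev/sdh`; the empty start and size fields take the defaults, i.e. the whole disk, and fd is the Linux raid autodetect type.)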
I added the new drive to the array
% sudo mdadm --add /dev/md0 /dev/sdh1
and followed it up with
% sudo mdadm --grow --raid-devices=5 --backup-file=/home/foundation/grow_md0.bak /dev/md0
(If the backup file gives you hope at this point, don't let it: that file does not exist anywhere on my filesystem, although I clearly remember typing it, and it is right there in my bash history.)
Again, everything seemed fine. I left it alone while it did its thing. Once it finished, without any errors, I ran e2fsck -f /dev/md0 again. Still nothing unusual. At that point I was confident enough to resize the filesystem.
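In hindsight, "it finished" was the step worth double-checking before resizing. A small helper along these lines (my own sketch; `mdstat_idle` is a made-up name) only reports idle when no reshape, recovery, or resync line appears in the mdstat text it is given:

```shell
#!/bin/bash
# Hypothetical helper (my naming): succeed only when the supplied mdstat
# text shows no reshape, recovery, or resync in flight.
mdstat_idle() {
    ! printf '%s\n' "$1" | grep -Eq 'reshape|recovery|resync'
}

# Usage sketch: refuse to touch the filesystem while md is still busy.
if mdstat_idle "$(cat /proc/mdstat 2>/dev/null)"; then
    echo "array idle: safe to e2fsck and resize2fs"
fi
```

Checking /proc/mdstat this way (or just reading it by eye) would have shown whether the reshape was genuinely complete.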
% sudo resize2fs /dev/md0
This completed without a peep. For completeness' sake, I shut the box down and waited for it to come back up.
The boot failed while trying to mount the partition. The array appeared to assemble without difficulty, but the mount failed: no EXT4 filesystem could be found.
Part of the dmesg output follows:
# [ 9.237762] md: bind<sdh1>
# [ 9.246063] md: bind<sdo>
# [ 9.248308] md: bind<sdn>
# [ 9.249661] bio: create slab <bio-1> at 1
# [ 9.249668] md/raid0:md2: looking at sdn
# [ 9.249669] md/raid0:md2: comparing sdn(1953524992) with sdn(1953524992)
# [ 9.249671] md/raid0:md2: END
# [ 9.249672] md/raid0:md2: ==> UNIQUE
# [ 9.249673] md/raid0:md2: 1 zones
# [ 9.249674] md/raid0:md2: looking at sdo
# [ 9.249675] md/raid0:md2: comparing sdo(1953524992) with sdn(1953524992)
# [ 9.249676] md/raid0:md2: EQUAL
# [ 9.249677] md/raid0:md2: FINAL 1 zones
# [ 9.249679] md/raid0:md2: done.
# [ 9.249680] md/raid0:md2: md_size is 3907049984 sectors.
# [ 9.249681] md2 configuration
# [ 9.249682]  zone0=[sdn/sdo/]
# [ 9.249683]  zone offset=0kb device offset=0kb size=1953524992kb
# [ 9.249684]
# [ 9.249685]
# [ 9.249690] md2: detected capacity change from 0 to 2000409591808
# [ 9.250162] sd 2:0:7:0: [sdk] Write Protect is off
# [ 9.250164] sd 2:0:7:0: [sdk] Mode Sense: 73 00 00 08
# [ 9.250331]  md2: unknown partition table
# [ 9.252371] sd 2:0:7:0: [sdk] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
# [ 9.252642] sd 2:0:9:0: [sdm] Write Protect is off
# [ 9.252644] sd 2:0:9:0: [sdm] Mode Sense: 73 00 00 08
# [ 9.254798] sd 2:0:9:0: [sdm] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
# [ 9.256555]  sdg: sdg1
# [ 9.261439] sd 2:0:8:0: [sdl] Write Protect is off
# [ 9.261441] sd 2:0:8:0: [sdl] Mode Sense: 73 00 00 08
# [ 9.263594] sd 2:0:8:0: [sdl] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
# [ 9.302372]  sdf: sdf1
# [ 9.310770] md: bind<sdd1>
# [ 9.317153]  sdj: sdj1
# [ 9.327325]  sdi: sdi1
# [ 9.327686] md: bind<sde1>
# [ 9.372897] sd 2:0:3:0: [sdg] Attached SCSI disk
# [ 9.391630]  sdm: sdm1
# [ 9.397435]  sdk: sdk1
# [ 9.400372]  sdl: sdl1
# [ 9.424751] sd 2:0:6:0: [sdj] Attached SCSI disk
# [ 9.439342] sd 2:0:5:0: [sdi] Attached SCSI disk
# [ 9.450533] sd 2:0:2:0: [sdf] Attached SCSI disk
# [ 9.464315] md: bind<sdg1>
# [ 9.534946] md: bind<sdj1>
# [ 9.541004] md: bind<sdf1>
[ 9.542537] md/raid:md0: device sdf1 operational as raid disk 2
[ 9.542538] md/raid:md0: device sdg1 operational as raid disk 3
[ 9.542540] md/raid:md0: device sde1 operational as raid disk 1
[ 9.542541] md/raid:md0: device sdd1 operational as raid disk 0
[ 9.542879] md/raid:md0: allocated 5334kB
[ 9.542918] md/raid:md0: raid level 5 active with 4 out of 5 devices, algorithm 2
[ 9.542923] RAID conf printout:
[ 9.542924]  --- level:5 rd:5 wd:4
[ 9.542925]  disk 0, o:1, dev:sdd1
[ 9.542926]  disk 1, o:1, dev:sde1
[ 9.542927]  disk 2, o:1, dev:sdf1
[ 9.542927]  disk 3, o:1, dev:sdg1
[ 9.542928]  disk 4, o:1, dev:sdh1
[ 9.542944] md0: detected capacity change from 0 to 2000415883264
[ 9.542959] RAID conf printout:
[ 9.542962]  --- level:5 rd:5 wd:4
[ 9.542963]  disk 0, o:1, dev:sdd1
[ 9.542964]  disk 1, o:1, dev:sde1
[ 9.542965]  disk 2, o:1, dev:sdf1
[ 9.542966]  disk 3, o:1, dev:sdg1
[ 9.542967]  disk 4, o:1, dev:sdh1
[ 9.542968] RAID conf printout:
[ 9.542969]  --- level:5 rd:5 wd:4
[ 9.542970]  disk 0, o:1, dev:sdd1
[ 9.542971]  disk 1, o:1, dev:sde1
[ 9.542972]  disk 2, o:1, dev:sdf1
[ 9.542972]  disk 3, o:1, dev:sdg1
[ 9.542973]  disk 4, o:1, dev:sdh1
[ 9.543005] md: recovery of RAID array md0
[ 9.543007] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 9.543008] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 9.543013] md: using 128k window, over a total of 488382784 blocks.
[ 9.543014] md: resuming recovery of md0 from checkpoint.
# [ 9.549495] sd 2:0:9:0: [sdm] Attached SCSI disk
# [ 9.555022] sd 2:0:8:0: [sdl] Attached SCSI disk
# [ 9.555612] sd 2:0:7:0: [sdk] Attached SCSI disk
# [ 9.561410] md: bind<sdi1>
[ 9.565538]  md0: unknown partition table
# [ 9.639444] md: bind<sdm1>
# [ 9.642729] md: bind<sdk1>
# [ 9.650048] md: bind<sdl1>
# [ 9.652342] md/raid:md1: device sdl1 operational as raid disk 3
# [ 9.652343] md/raid:md1: device sdk1 operational as raid disk 2
# [ 9.652345] md/raid:md1: device sdm1 operational as raid disk 4
# [ 9.652346] md/raid:md1: device sdi1 operational as raid disk 0
# [ 9.652347] md/raid:md1: device sdj1 operational as raid disk 1
# [ 9.652627] md/raid:md1: allocated 5334kB
# [ 9.652654] md/raid:md1: raid level 5 active with 5 out of 5 devices, algorithm 2
# [ 9.652655] RAID conf printout:
# [ 9.652656]  --- level:5 rd:5 wd:5
# [ 9.652657]  disk 0, o:1, dev:sdi1
# [ 9.652658]  disk 1, o:1, dev:sdj1
# [ 9.652658]  disk 2, o:1, dev:sdk1
# [ 9.652659]  disk 3, o:1, dev:sdl1
# [ 9.652660]  disk 4, o:1, dev:sdm1
# [ 9.652676] md1: detected capacity change from 0 to 3000614518784
# [ 9.654507]  md1: unknown partition table
# [ 11.093897] vesafb: framebuffer at 0xfd000000, mapped to 0xffffc90014200000, using 1536k, total 1536k
# [ 11.093899] vesafb: mode is 1024x768x16, linelength=2048, pages=0
# [ 11.093901] vesafb: scrolling: redraw
# [ 11.093903] vesafb: Truecolor: size=0:5:6:5, shift=0:11:5:0
# [ 11.094010] Console: switching to colour frame buffer device 128x48
# [ 11.206677] fb0: VESA VGA frame buffer device
# [ 11.301061] EXT4-fs (sda1): re-mounted. Opts: user_xattr,errors=remount-ro
# [ 11.428472] EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 11.896204] EXT4-fs (sdc6): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 12.262728] r8169 0000:01:00.0: eth0: link up
# [ 12.263975] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
# [ 13.528097] EXT4-fs (sdc1): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 13.681339] EXT4-fs (md2): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 14.310098] EXT4-fs (md1): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 14.357675] EXT4-fs (sdc5): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 16.933348] audit_printk_skb: 9 callbacks suppressed
# [ 22.350011] eth0: no IPv6 routers present
# [ 27.094760] ppdev: user-space parallel port driver
# [ 27.168812] kvm: Nested Virtualization enabled
# [ 27.168814] kvm: Nested Paging enabled
# [ 30.383664] EXT4-fs (sda1): re-mounted. Opts: user_xattr,errors=remount-ro,commit=0
# [ 30.385125] EXT4-fs (sdb1): re-mounted. Opts: user_xattr,commit=0
# [ 32.105044] EXT4-fs (sdc6): re-mounted. Opts: user_xattr,commit=0
# [ 33.078017] EXT4-fs (sdc1): re-mounted. Opts: user_xattr,commit=0
# [ 33.079491] EXT4-fs (md2): re-mounted. Opts: user_xattr,commit=0
# [ 33.082411] EXT4-fs (md1): re-mounted. Opts: user_xattr,commit=0
# [ 35.369796] EXT4-fs (sdc5): re-mounted. Opts: user_xattr,commit=0
# [ 35.674390] CE: hpet increased min_delta_ns to 20113 nsec
# [ 35.676242] CE: hpet increased min_delta_ns to 30169 nsec
# [ 35.677808] CE: hpet increased min_delta_ns to 45253 nsec
# [ 35.679349] CE: hpet increased min_delta_ns to 67879 nsec
# [ 35.680312] CE: hpet increased min_delta_ns to 101818 nsec
# [ 35.680312] CE: hpet increased min_delta_ns to 152727 nsec
# [ 35.680312] CE: hpet increased min_delta_ns to 229090 nsec
# [ 35.680312] CE: hpet increased min_delta_ns to 343635 nsec
# [ 35.681590] CE: hpet increased min_delta_ns to 515452 nsec
# [ 436.595366] EXT4-fs (md2): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 607.364501] exe (14663): /proc/14663/oom_adj is deprecated, please use /proc/14663/oom_score_adj instead.
[ 2016.476772] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[ 2246.923154] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[ 2293.383934] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[ 2337.292080] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[ 2364.812150] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[ 2392.624988] EXT4-fs (md0): VFS: Can't find ext4 filesystem
# [ 3098.003646] CE: hpet increased min_delta_ns to 773178 nsec
[ 4208.380943] md: md0: recovery done.
[ 4208.470356] RAID conf printout:
[ 4208.470363]  --- level:5 rd:5 wd:5
[ 4208.470369]  disk 0, o:1, dev:sdd1
[ 4208.470374]  disk 1, o:1, dev:sde1
[ 4208.470378]  disk 2, o:1, dev:sdf1
[ 4208.470382]  disk 3, o:1, dev:sdg1
[ 4208.470385]  disk 4, o:1, dev:sdh1
[ 7982.600595] EXT4-fs (md0): VFS: Can't find ext4 filesystem
During boot it asked me what I wanted to do; I told it to carry on, and picked things up once the machine was back up. The first thing I did was check /proc/mdstat...
# Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
# md1 : active raid5 sdl1[3] sdk1[2] sdm1[4] sdi1[0] sdj1[1]
#       2930287616 blocks level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]
#
# md2 : active raid0 sdn[0] sdo[1]
#       1953524992 blocks 64k chunks
md0 : active raid5 sdf1[2] sdg1[3] sde1[1] sdd1[0] sdh1[5]
      1953531136 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
#
# unused devices: <none>
...and /etc/mdadm/mdadm.conf:
ARRAY /dev/md0 level=raid5 num-devices=5 UUID=98941898:e5652fdb:c82496ec:0ebe2003
# ARRAY /dev/md1 level=raid5 num-devices=5 UUID=67d5a3ed:f2890ea4:004365b1:3a430a78
# ARRAY /dev/md2 level=raid0 num-devices=2 UUID=d1ea9162:cb637b4b:004365b1:3a430a78
Then I checked with fdisk:
foundation@foundation:~$ sudo fdisk -l /dev/sd[defgh]

Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000821e5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       60801   488384001   fd  Linux raid autodetect

Disk /dev/sde: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00004a72

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1       60801   488384001   fd  Linux raid autodetect

Disk /dev/sdf: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000443c2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1       60801   488384001   fd  Linux raid autodetect

Disk /dev/sdg: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000e428

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1       60801   488384001   fd  Linux raid autodetect

Disk /dev/sdh: 500.1 GB, 500107862016 bytes
81 heads, 63 sectors/track, 191411 cylinders
Units = cylinders of 5103 * 512 = 2612736 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8c4d0ecf

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               1      191412   488385560   fd  Linux raid autodetect
Everything seemed to be in order, so I checked the details of the array and examined its component devices.
foundation@foundation:~$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri May 13 00:57:15 2011
     Raid Level : raid5
     Array Size : 1953531136 (1863.03 GiB 2000.42 GB)
  Used Dev Size : 488382784 (465.76 GiB 500.10 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Fri May 13 04:43:10 2011
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : foundation:0  (local to host foundation)
           UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
         Events : 32

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
       2       8       81        2      active sync   /dev/sdf1
       3       8       97        3      active sync   /dev/sdg1
       5       8      113        4      active sync   /dev/sdh1

foundation@foundation:~$ sudo mdadm --examine /dev/sd[defgh]1
/dev/sdd1: (samsung)
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
           Name : foundation:0  (local to host foundation)
  Creation Time : Fri May 13 00:57:15 2011
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 976765954 (465.76 GiB 500.10 GB)
     Array Size : 3907062272 (1863.03 GiB 2000.42 GB)
  Used Dev Size : 976765568 (465.76 GiB 500.10 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 6e6422de:f39c618a:2cab1161:b36c8341

    Update Time : Fri May 13 15:53:06 2011
       Checksum : 679bf575 - correct
         Events : 32

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 0
    Array State : AAAAA ('A' == active, '.' == missing)
/dev/sde1: (samsung)
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
           Name : foundation:0  (local to host foundation)
  Creation Time : Fri May 13 00:57:15 2011
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 976765954 (465.76 GiB 500.10 GB)
     Array Size : 3907062272 (1863.03 GiB 2000.42 GB)
  Used Dev Size : 976765568 (465.76 GiB 500.10 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : bd02892c:a346ec88:7ffcf757:c18eee12

    Update Time : Fri May 13 15:53:06 2011
       Checksum : 7cdeb0d5 - correct
         Events : 32

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 1
    Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdf1: (samsung)
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
           Name : foundation:0  (local to host foundation)
  Creation Time : Fri May 13 00:57:15 2011
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 976765954 (465.76 GiB 500.10 GB)
     Array Size : 3907062272 (1863.03 GiB 2000.42 GB)
  Used Dev Size : 976765568 (465.76 GiB 500.10 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : acd3d576:54c09121:0636980e:0a490f59

    Update Time : Fri May 13 15:53:06 2011
       Checksum : 5c91ef46 - correct
         Events : 32

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 2
    Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdg1: (samsung)
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
           Name : foundation:0  (local to host foundation)
  Creation Time : Fri May 13 00:57:15 2011
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 976765954 (465.76 GiB 500.10 GB)
     Array Size : 3907062272 (1863.03 GiB 2000.42 GB)
  Used Dev Size : 976765568 (465.76 GiB 500.10 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 5f923d06:993ac9f3:a41ffcde:73876130

    Update Time : Fri May 13 15:53:06 2011
       Checksum : 65e75047 - correct
         Events : 32

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 3
    Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdh1: (western digital)
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
           Name : foundation:0  (local to host foundation)
  Creation Time : Fri May 13 00:57:15 2011
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 976769072 (465.76 GiB 500.11 GB)
     Array Size : 3907062272 (1863.03 GiB 2000.42 GB)
  Used Dev Size : 976765568 (465.76 GiB 500.10 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 622c546d:41fe9683:42ecf909:cebcf6a4

    Update Time : Fri May 13 15:53:06 2011
       Checksum : fc5ebc1a - correct
         Events : 32

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 4
    Array State : AAAAA ('A' == active, '.' == missing)
I tried to mount it myself:
foundation@foundation:~$ sudo mount -t ext4 -o defaults,rw /dev/md0 mnt
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
No go. So at this point I started trying the various things suggested in posts here and elsewhere. The first was an e2fsck.
foundation@foundation:~$ sudo e2fsck -f /dev/md0
e2fsck 1.41.14 (22-Dec-2010)
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md0

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
As that suggestion echoed things I had been reading, I gave it a try.
foundation@foundation:~$ sudo mke2fs -n /dev/md0
mke2fs 1.41.14 (22-Dec-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=16 blocks, Stripe width=64 blocks
122101760 inodes, 488382784 blocks
24419139 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
14905 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848

foundation@foundation:~$ sudo e2fsck -fb 32768 /dev/md0
e2fsck 1.41.14 (22-Dec-2010)
e2fsck: Bad magic number in super-block while trying to open /dev/md0

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
I repeated this for each of the reported backup superblocks. Nothing. I also tried mounting it while pointing at the different superblocks, as some sites suggested...
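For the record, that repetition can be scripted. This sketch only echoes the commands (drop the echo to actually run them) and takes the block list straight from the mke2fs -n output above:

```shell
#!/bin/bash
# Backup superblock locations reported by `mke2fs -n /dev/md0` above.
backups="32768 98304 163840 229376 294912 819200 884736 1605632 2654208
         4096000 7962624 11239424 20480000 23887872 71663616 78675968
         102400000 214990848"

for sb in $backups; do
    # echo makes this a dry run; remove it to really try each backup superblock
    echo sudo e2fsck -fb "$sb" /dev/md0
done
```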
$ sudo mount -t ext4 -o sb=$(( (4096 / 1024) * 32768 )),ro /dev/md0 mnt
foundation@foundation:~$ sudo mount -t ext4 -o sb=131072,ro /dev/md0 mnt
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
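(For anyone wondering where 131072 came from: mount's sb= option takes the superblock location in 1 KiB units, so each backup block number from mke2fs -n gets scaled by blocksize/1024 — with the 4 KiB blocks reported above, a factor of 4. A quick sketch of the arithmetic:

```shell
#!/bin/bash
# sb= wants 1 KiB units: block_number * (blocksize / 1024)
blocksize=4096
for block in 32768 98304 163840; do
    echo "backup block $block -> mount -o sb=$(( block * blocksize / 1024 ))"
done
```

so block 32768 becomes sb=131072, block 98304 becomes sb=393216, and so on.)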
At that point I found several posts mentioning Christophe Grenier's TestDisk and gave it a shot. It looked promising, but it could not find anything useful either; it could not even see any files. (photorec did, and at that point I did attempt a "recovery" with it, only for it to segfault about a fifth of the way through.)
After setting TestDisk off on a deeper search, I walked away and got some sleep. This morning I came back to this:
TestDisk 6.11, Data Recovery Utility, April 2009
Christophe GRENIER <[email protected]>
http://www.cgsecurity.org

Disk /dev/md0 - 2000 GB / 1863 GiB - CHS 488382784 2 4
     Partition               Start        End    Size in sectors
 1 D Linux                    98212   0  1  73360627   1  4   586099328
 2 D Linux                    98990   0  1  73361405   1  4   586099328
 3 D Linux                    99006   0  1  73361421   1  4   586099328
 4 D Linux                    99057   0  1  73361472   1  4   586099328
 5 D Linux                    99120   0  1  73361535   1  4   586099328
 6 D Linux                182535942   0  1 426669713   1  4  1953070176
 7 D Linux                182536009   0  1 426669780   1  4  1953070176
 8 D Linux                182536470   0  1 426670241   1  4  1953070176
 9 D Linux                182538637   0  1 426672408   1  4  1953070176
10 D Linux                204799120   0  1 326894735   1  4   976764928

Structure: Ok.  Use Up/Down Arrow keys to select partition.
Use Left/Right Arrow keys to CHANGE partition characteristics:
*=Primary bootable  P=Primary  L=Logical  E=Extended  D=Deleted
Keys A: add partition, L: load backup, T: change type, P: list files, Enter: to continue
The first few times I used TestDisk I had not let it finish a full search, so I had never seen this more complete list before.
For identification purposes, the numbers down the left of the partition list are mine. At the bottom of the screen TestDisk shows a line of descriptive information that changes as you highlight a particular partition. The descriptions matching the numbered partitions are listed below.
1, 2, 3, 4, 5:  EXT3 Large file Sparse superblock Recover, 300 GB / 279 GiB
6, 7, 8, 9:     EXT4 Large file Sparse superblock Recover, 999 GB / 931 GiB
10:             EXT3 Large file Sparse superblock Backup superblock, 500 GB / 465 GiB
Even with the more detailed partition list it made no difference: it could not read any files from any of the listed partitions.
So that is where I am now. I have run out of avenues that I can find on my own. The data here matters: it is my fiancée's photography drive. I know that RAID is not backup — that is painfully clear now — but I do not think I can afford a backup solution that would hold a terabyte or two; that is something I will be looking into over the next few days. In the meantime, I hope you will forgive this skyscraper of text and help me find a way out.
Oh, one more thing... the layout of TestDisk's final partition list looks very suspicious to me: one group of five possible partitions followed by another group of four, numbers that match the number of devices in the array. Maybe that is a clue.
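(For what it's worth, TestDisk is using a fake geometry of 2 heads × 4 sectors here — the "CHS 488382784 2 4" line — so a candidate's starting cylinder converts to a sector offset by multiplying by 8. My own arithmetic, not TestDisk output:

```shell
#!/bin/bash
# TestDisk geometry above: CHS 488382784 2 4 -> 2*4 = 8 sectors per cylinder.
heads=2 sectors_per_track=4 bytes_per_sector=512
for cyl in 98212 98990 99006 99057 99120; do
    start=$(( cyl * heads * sectors_per_track ))
    echo "cylinder $cyl -> sector $start (byte offset $(( start * bytes_per_sector )))"
done
```

The five candidates in each group sit within a few thousand sectors of one another, which might be what you would expect from superblock copies displaced stripe by stripe across the member disks — possibly why the group sizes track the device count.)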
At this point you probably want to look at ddrescue and see whether anything can be recovered...
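Before experimenting any further, image the array first so nothing else you try can make it worse. A typical ddrescue invocation, sketched as a dry run (the /mnt/backup paths are placeholders for a destination on a different disk with ~2 TB free; remove the echo to execute):

```shell
#!/bin/bash
# Dry run: -r3 retries bad areas three times; the mapfile lets ddrescue resume.
echo sudo ddrescue -r3 /dev/md0 /mnt/backup/md0.img /mnt/backup/md0.mapfile
```

All subsequent recovery attempts (testdisk, photorec, e2fsck) can then be pointed at the image instead of the live array.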
I think you jumped the gun and resized the FS before the reshape was finished. In your dmesg output you can see that the reshape still had to complete after the reboot:
[ 9.542918] md/raid:md0: raid level 5 active with 4 out of 5 devices, algorithm 2
[ 9.543005] md: recovery of RAID array md0
[ 9.543007] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 9.543008] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 9.543013] md: using 128k window, over a total of 488382784 blocks.
[ 9.543014] md: resuming recovery of md0 from checkpoint.
[ 4208.380943] md: md0: recovery done.
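For anyone reading this later, the safe ordering is to make mdadm block until the reshape is done before touching the filesystem; `mdadm --wait` does exactly that. A sketch of the sequence as a dry run (my commands, not the OP's exact session):

```shell
#!/bin/bash
# Safe grow sequence, printed rather than executed.
# mdadm --wait blocks until any reshape/recovery/resync on the array has
# finished, so e2fsck/resize2fs can never run against a half-reshaped array.
steps=(
  "mdadm --add /dev/md0 /dev/sdh1"
  "mdadm --grow --raid-devices=5 --backup-file=/root/grow_md0.bak /dev/md0"
  "mdadm --wait /dev/md0"
  "e2fsck -f /dev/md0"
  "resize2fs /dev/md0"
)
for s in "${steps[@]}"; do
    echo "sudo $s"   # dry run: run these by hand, in order, to execute
done
```

Note the backup file should also live on a filesystem that is not part of the array being reshaped.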
Edit: the data is most likely gone.