After the server was accidentally knocked over, the RAID1 on my GNU/Linux server would no longer start. Yes, I have off-site backups of everything important, but retrieving it all would be inconvenient, so I want to try recovering from the failing/failed disk array.
The RAID1 consisted of two mirrored 2 TB drives. After the server was knocked over, I could bring up one of the RAID drives with:
mdadm --assemble --scan
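When only one member of a mirror is healthy, mdadm can refuse to start the array automatically. A sketch of forcing a degraded, read-only start (device names are from my setup; adjust to yours):

```shell
# Stop any half-assembled array first.
mdadm --stop /dev/md0
# --run starts the mirror even though it is degraded;
# --readonly avoids writing to a suspect disk while inspecting it.
mdadm --assemble --run --readonly /dev/md0 /dev/sdb1
```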
I could see my files, so I figured I would quickly buy two replacement drives and start rebuilding the array.
However, by the time the replacements arrived, the drive had degraded further. I was able to use dd to copy /dev/sdb onto one of the new drives. There were a few I/O errors at the start of the disk (the first few megabytes) and a few more in the middle, but most of it seemed to copy successfully.
Now mdadm does not detect any RAID filesystem on /dev/sdb. It does detect a RAID filesystem on /dev/sdc, but that drive is so damaged it cannot even list any files.
My question is: can I somehow combine the data from the better, but undetectable, RAID member /dev/sdb with the worse, but at least detectable, RAID member /dev/sdc? Can I somehow use the fdisk output from the RAID device /dev/md0 to tell the new drive that it really does contain a valid filesystem?
Here is the fdisk output, including a run against the RAID device /dev/md0:
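For a disk that is throwing I/O errors, plain dd either aborts or (with conv=noerror) can silently shift data; GNU ddrescue is built for exactly this case. A sketch, where /dev/sdX stands in for the actual new target drive (a placeholder, not a device from my setup):

```shell
# First pass: grab everything easily readable, skipping bad areas quickly.
# The mapfile records which sectors were rescued so later runs can resume.
ddrescue -n /dev/sdb /dev/sdX rescue.map
# Second pass: go back and retry the bad areas up to 3 times.
ddrescue -r3 /dev/sdb /dev/sdX rescue.map
```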
fdisk -l /dev/sdb
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00055ba4

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1  *     2048 3907028991 3907026944  1.8T fd Linux raid autodetect

fdisk -l /dev/sdc
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00055ba4

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdc1  *     2048 3907028991 3907026944  1.8T fd Linux raid autodetect

fdisk -l /dev/md0
Disk /dev/md0: 1.8 TiB, 2000263577600 bytes, 3906764800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x1b40b19b
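It may also be worth confirming whether the copied drive truly lacks an md superblock or mdadm simply isn't scanning it (with metadata 1.2, the superblock sits 4 KiB into the member partition). A sketch of checking both members directly:

```shell
# Print the md superblock on the undetected copy, if one exists at all
# (this is the metadata mdadm reads during --assemble --scan).
mdadm --examine /dev/sdb1
# Compare against the detectable-but-damaged member.
mdadm --examine /dev/sdc1
```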
After using ddrescue to copy the barely-readable /dev/sdc onto a spare drive, /dev/sdd, I could run my RAID assemble command against /dev/sdd and successfully start the array.
lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda             8:0    1  7.5G  0 disk
└─sda1          8:1    1  7.5G  0 part  /cdrom
sdb             8:16   0  1.8T  0 disk
└─sdb1          8:17   0  1.8T  0 part
sdc             8:32   0  1.8T  0 disk
└─sdc1          8:33   0  1.8T  0 part
sdd             8:48   0  2.7T  0 disk
└─sdd1          8:49   0  1.8T  0 part
  └─md0         9:0    0  1.8T  0 raid1
    ├─md0p1   259:0    0  1.8T  0 md
    └─md0p2   259:1    0  4.9G  0 md
loop0           7:0    0  1.4G  1 loop  /rofs
So we now see two partitions, md0p1 and md0p2. md0p1 is an ext4 filesystem and md0p2 is Linux swap.
mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Aug 17 06:35:12 2015
     Raid Level : raid1
     Array Size : 1953382400 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953382400 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Oct 24 12:27:40 2017
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : sysresccd:0
           UUID : 26f2462f:1c4efdac:587b912b:1d30f3c8
         Events : 4585024

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       2       0        0        2      removed
I could not mount md0p1 because it kept complaining about an invalid superblock. However, after I ran this command:
e2fsck -b 32768 /dev/md0p1
I could then mount it and see a basic root structure:
root@ubuntu:/# mount /dev/md0p1 /media/f
root@ubuntu:/# cd /media/f
root@ubuntu:/media/f# ls
ls: cannot access 'sbin': Structure needs cleaning
ls: cannot access 'usr': Structure needs cleaning
ls: cannot access 'srv': Structure needs cleaning
ls: cannot access 'v': Structure needs cleaning
ls: cannot access 'lib': Structure needs cleaning
ls: cannot access 'lib64': Structure needs cleaning
ls: cannot access 'a': Structure needs cleaning
ls: cannot access 'var': Structure needs cleaning
ls: cannot access 'root': Structure needs cleaning
ls: cannot access 'media': Structure needs cleaning
ls: cannot access 'boot': Structure needs cleaning
ls: cannot access 'dev': Structure needs cleaning
ls: cannot access 'tmp': Structure needs cleaning
ls: cannot access 'home': Structure needs cleaning
ls: cannot access 'etc': Structure needs cleaning
ls: cannot access 'run': Structure needs cleaning
ls: cannot access 'opt': Structure needs cleaning
ls: cannot access 'bin': Structure needs cleaning
ls: cannot access 'sys': Structure needs cleaning
ls: cannot access 'proc': Structure needs cleaning
ls: cannot access 'mnt': Structure needs cleaning
a     home            lib         mnt       run   undef.log  vmlinuz
bin   initrd.img      lib64       opt       sbin  usr        vmlinuz.old
boot  initrd.img.old  libnss3.so  pictures  srv   v
dev   lost+found      proc        sys       var
etc   media           root        tmp       videos
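For context on why -b 32768 worked: ext4 keeps backup copies of the superblock at fixed block numbers (32768, 98304, 163840, … for a 4 KiB block size; dumpe2fs or mke2fs -n on the device will list the exact ones). The byte offset of the backup used above is just block number times block size:

```shell
# Byte offset of the ext4 backup superblock at block 32768, 4 KiB blocks.
BLOCK_SIZE=4096
BACKUP_BLOCK=32768
echo $((BACKUP_BLOCK * BLOCK_SIZE))   # prints 134217728
```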
Now, my question is: can I copy the RAID superblock from /dev/sdc and somehow force it onto my new drive (which is a copy of /dev/sdb, but without a RAID superblock)? Or should I just try re-creating the RAID on the new drive and hope it detects that the drive was part of a RAID, rather than destroying my data?
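The re-creation route I'm considering would look roughly like this (a sketch only, not something I've run; as I understand it, the metadata version, level, device order and offsets all have to match the original array exactly, or the data is gone):

```shell
# LAST RESORT, ideally only against a copy of the data:
# re-create the mirror in place over the data-bearing partition.
# --assume-clean skips the initial resync so member data is not rewritten;
# "missing" leaves the second mirror slot empty for now.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.2 \
      --assume-clean /dev/sdb1 missing
```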