We installed the Rocks 5.5 distribution on our cluster and want to upgrade to 6.2. The "/export" partition is mounted on an Intel software RAID 0 array of two identical hard drives (HDs), /dev/sdb and /dev/sdc, while the remaining partitions, i.e. "/", "/var" and "swap", are on a separate (boot) HD, /dev/sda.
The first upgrade attempt failed with a warning saying that RAID metadata had been found on the HD and the installation could not continue. I naively got around this by executing:
dmraid -r -E /dev/sda
The error no longer appeared and I was able to perform the upgrade. Using manual partitioning, I formatted the boot HD and left the RAID array unformatted, to be remounted as "/export".
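With hindsight, the metadata area should have been backed up before running any destructive dmraid command, so it could be restored later with a plain dd. A minimal sketch of what that would look like, demonstrated on a scratch image file standing in for a real member disk (the paths are hypothetical, and it assumes the Intel "isw" metadata sits in the last sectors of the drive):

```shell
# Stand-in for a real member disk such as /dev/sdb (hypothetical here).
disk=$(mktemp)
dd if=/dev/zero of="$disk" bs=1M count=8 status=none

# Save the final 1 MiB of the "disk", which is where isw metadata
# is commonly stored; restoring it is the same dd with if/of swapped.
size=$(stat -c %s "$disk")
dd if="$disk" of="$disk.isw-meta.bak" bs=512 \
   skip=$(( (size - 1024*1024) / 512 )) status=none

ls -l "$disk.isw-meta.bak"
```

On a real system the same commands would be run against each member device, with the backup written to a filesystem that survives the reinstall.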
After the installation completed, the boot process failed, saying:
ERROR: ddf1: Cannot find physical drive description on /dev/sdc!
ERROR: ddf1: setting up RAID device /dev/sdc
ERROR: ddf1: Cannot find physical drive description on /dev/sdb!
ERROR: ddf1: setting up RAID device /dev/sdb
/export1: The filesystem size (according to the superblock) is 488378000 blocks
The physical size of the device is 244190638 blocks
Either the superblock or partition table is likely to be corrupt!
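The two sizes in that error differ by almost exactly a factor of two, which is what one would expect if the ext2 filesystem spanning the 2-disk RAID 0 volume is being probed through a single member disk instead of the assembled array. A quick check of the numbers from the boot error makes this explicit:

```shell
# Sizes taken verbatim from the boot error above.
fs_blocks=488378000    # filesystem size according to the superblock
dev_blocks=244190638   # physical size of the device it was found on
# Rounded integer ratio; 2 matches a 2-disk stripe seen via one member.
ratio=$(( (fs_blocks + dev_blocks / 2) / dev_blocks ))
echo "ratio: $ratio"   # prints "ratio: 2"
```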
Booting Rocks in "rescue" mode, we can mount the disks again, although it says the RAID partition was not cleanly unmounted.
"dmraid" shows the RAID array, albeit with errors:
$ dmraid -r
ERROR: ddf1: Cannot find physical drive description on /dev/sdc!
ERROR: ddf1: setting up RAID device /dev/sdc
ERROR: ddf1: Cannot find physical drive description on /dev/sdb!
ERROR: ddf1: setting up RAID device /dev/sdb
/dev/sdc: isw, "isw_eecceiche", GROUP, ok, 1953525166 sectors, data@ 0
/dev/sdb: isw, "isw_eecceiche", GROUP, ok, 1953525166 sectors, data@ 0

$ dmraid -s
ERROR: ddf1: Cannot find physical drive description on /dev/sdc!
ERROR: ddf1: setting up RAID device /dev/sdc
ERROR: ddf1: Cannot find physical drive description on /dev/sdb!
ERROR: ddf1: setting up RAID device /dev/sdb
*** Group superset isw_eecceiche
--> Active Subset
name   : isw_eecceiche_Volume0
size   : 3907038720
stride : 256
type   : stripe
status : ok
subsets: 0
devs   : 2
spares : 0
Here is "fstab":
#
# /etc/fstab
# Created by anaconda on Tue Sep 15 17:35:11 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=90471019-650c-4901-a8f1-e8cce3fbc059 /        ext4   defaults        1 1
UUID=5dae925e-6e01-4442-8f5b-07bfbde7ff09 /export  ext2   defaults        1 2
UUID=18303228-189f-4fa3-9661-71786323d70d /var     ext4   defaults        1 2
UUID=d14c42ec-e41a-4dbd-b6b8-60afb4aa1b14 swap     swap   defaults        0 0
tmpfs   /dev/shm  tmpfs  defaults         0 0
devpts  /dev/pts  devpts gid=5,mode=620   0 0
sysfs   /sys      sysfs  defaults         0 0
proc    /proc     proc   defaults         0 0
# The ram-backed filesystem for ganglia RRD graph databases.
tmpfs /var/lib/ganglia/rrds tmpfs size=2045589000,gid=nobody,uid=nobody,defaults 1 0
and "blkid":
/dev/loop0: TYPE="squashfs"
/dev/sda1: UUID="90471019-650c-4901-a8f1-e8cce3fbc059" TYPE="ext4"
/dev/sda2: UUID="18303228-189f-4fa3-9661-71786323d70d" TYPE="ext4"
/dev/sda3: UUID="d14c42ec-e41a-4dbd-b6b8-60afb4aa1b14" TYPE="swap"
/dev/sdb: UUID="M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?" TYPE="ddf_raid_member"
/dev/sdc: UUID="M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?" TYPE="ddf_raid_member"
/dev/sde1: LABEL="Expansion Drive" UUID="BC448C59448C1872" TYPE="ntfs"
/dev/sdd1: UUID="66F2-41D7" TYPE="vfat"
/dev/mapper/isw_eecceiche_Volume0p1: LABEL="/export1" UUID="5dae925e-6e01-4442-8f5b-07bfbde7ff09" TYPE="ext2"
and the "fdisk" output:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x44f45cd4

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1      111403   894841856   83  Linux
/dev/sda2          111403      119562    65536000   83  Linux
/dev/sda3          119562      121602    16382976   82  Linux swap / Solaris

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00045387

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1      243201  1953512001   83  Linux

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
6 heads, 1 sectors/track, 325587528 cylinders
Units = cylinders of 6 * 512 = 3072 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

This doesn't look like a partition table
Probably you selected the wrong device.

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1           22094       22341        743+  cf  Unknown
/dev/sdc2   ?           1           1           0    0  Empty
Partition 2 does not end on cylinder boundary.
/dev/sdc3       357936035   357936283        743+  cf  Unknown
/dev/sdc4               1           1           0    0  Empty
Partition 4 does not end on cylinder boundary.

Disk /dev/mapper/isw_eecceiche_Volume0: 2000.4 GB, 2000403824640 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk identifier: 0x00045387

                              Device Boot      Start         End      Blocks   Id  System
/dev/mapper/isw_eecceiche_Volume0p1   *           1      243201  1953512001   83  Linux
Partition 1 does not start on physical sector boundary.

Disk /dev/mapper/isw_eecceiche_Volume0p1: 2000.4 GB, 2000396289024 bytes
255 heads, 63 sectors/track, 243200 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Alignment offset: 98816 bytes
Disk identifier: 0x00000000
We can access the "/export" partition in "rescue" mode and its data is still intact.

I would be glad to know whether there is a way to rebuild the RAID metadata, rather than formatting or deleting and re-creating the array.

Any help with this problem is greatly appreciated.