After upgrading to SLES11 SP1 – LVM on top of software RAID5 no longer works

I performed a fresh SLES11 SP1 installation on a system that had been running OpenSuSE 11.1 for some time. The system uses a software RAID5 array, and on top of that array sits LVM with a single volume of roughly 2.5 TB that is mounted at /data.

The problem is that SLES11 SP1 does not recognize the software RAID correctly, and as a result I cannot mount the LVM volume.

This is the output of vgdisplay and pvdisplay:

    $ vgdisplay
      --- Volume group ---
      VG Name               vg001
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  2
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               0
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               2.73 TB
      PE Size               4.00 MB
      Total PE              715402
      Alloc PE / Size       715402 / 2.73 TB
      Free  PE / Size       0 / 0
      VG UUID               Aryj93-QgpG-8V1S-qGV7-gvFk-GKtc-OTmuFk

    $ pvdisplay
      --- Physical volume ---
      PV Name               /dev/md0
      VG Name               vg001
      PV Size               2.73 TB / not usable 896.00 KB
      Allocatable           yes (but full)
      PE Size (KByte)       4096
      Total PE              715402
      Free PE               0
      Allocated PE          715402
      PV UUID               rIpmyi-GmB9-oybx-pwJr-50YZ-GQgQ-myGjhi

The PE information suggests that the size of the volume is recognized, but the physical volume itself is not accessible.
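For what it is worth, this is roughly how I would try to bring the volume group online by hand (just a sketch of the usual LVM commands; vg001 and /dev/md0 are the names from the output above):

    # Rescan all block devices for LVM physical volumes
    $ pvscan

    # Rescan for volume groups, then activate vg001 explicitly
    $ vgscan
    $ vgchange -ay vg001

    # If activation worked, the logical volume should show up as ACTIVE here
    $ lvscan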

The software RAID itself appears to be running fine. It is assembled from the mdadm.conf shown below, followed by the mdadm diagnostics for the md0 device and for the devices used in the assembly:

    $ cat /etc/mdadm.conf
    DEVICE partitions
    ARRAY /dev/md0 auto=no level=raid5 num-devices=4 UUID=a0340426:324f0a4f:2ce7399e:ae4fabd0

    $ cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdb[0] sde[3] sdd[2] sdc[1]
          2930287488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    unused devices: <none>

    $ mdadm --detail /dev/md0
    /dev/md0:
            Version : 0.90
      Creation Time : Tue Oct 27 13:04:40 2009
         Raid Level : raid5
         Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
      Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 0
        Persistence : Superblock is persistent

        Update Time : Mon Feb 27 14:55:46 2012
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0

             Layout : left-symmetric
         Chunk Size : 64K

               UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
             Events : 0.20

        Number   Major   Minor   RaidDevice State
           0       8       16        0      active sync   /dev/sdb
           1       8       32        1      active sync   /dev/sdc
           2       8       48        2      active sync   /dev/sdd
           3       8       64        3      active sync   /dev/sde

    $ mdadm --examine /dev/sdb
    /dev/sdb:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
      Creation Time : Tue Oct 27 13:04:40 2009
         Raid Level : raid5
      Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
         Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 0

        Update Time : Mon Feb 27 14:55:46 2012
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0
           Checksum : 2b5182e8 - correct
             Events : 20

             Layout : left-symmetric
         Chunk Size : 64K

          Number   Major   Minor   RaidDevice State
    this     0       8       16        0      active sync   /dev/sdb

       0     0       8       16        0      active sync   /dev/sdb
       1     1       8       32        1      active sync   /dev/sdc
       2     2       8       48        2      active sync   /dev/sdd
       3     3       8       64        3      active sync   /dev/sde

    $ mdadm --examine /dev/sdc
    /dev/sdc:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
      Creation Time : Tue Oct 27 13:04:40 2009
         Raid Level : raid5
      Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
         Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 0

        Update Time : Mon Feb 27 14:55:46 2012
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0
           Checksum : 2b5182fa - correct
             Events : 20

             Layout : left-symmetric
         Chunk Size : 64K

          Number   Major   Minor   RaidDevice State
    this     1       8       32        1      active sync   /dev/sdc

       0     0       8       16        0      active sync   /dev/sdb
       1     1       8       32        1      active sync   /dev/sdc
       2     2       8       48        2      active sync   /dev/sdd
       3     3       8       64        3      active sync   /dev/sde

    $ mdadm --examine /dev/sdd
    /dev/sdd:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
      Creation Time : Tue Oct 27 13:04:40 2009
         Raid Level : raid5
      Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
         Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 0

        Update Time : Mon Feb 27 14:55:46 2012
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0
           Checksum : 2b51830c - correct
             Events : 20

             Layout : left-symmetric
         Chunk Size : 64K

          Number   Major   Minor   RaidDevice State
    this     2       8       48        2      active sync   /dev/sdd

       0     0       8       16        0      active sync   /dev/sdb
       1     1       8       32        1      active sync   /dev/sdc
       2     2       8       48        2      active sync   /dev/sdd
       3     3       8       64        3      active sync   /dev/sde

    $ mdadm --examine /dev/sde
    /dev/sde:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
      Creation Time : Tue Oct 27 13:04:40 2009
         Raid Level : raid5
      Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
         Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 0

        Update Time : Mon Feb 27 14:55:46 2012
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0
           Checksum : 2b51831e - correct
             Events : 20

             Layout : left-symmetric
         Chunk Size : 64K

          Number   Major   Minor   RaidDevice State
    this     3       8       64        3      active sync   /dev/sde

       0     0       8       16        0      active sync   /dev/sdb
       1     1       8       32        1      active sync   /dev/sdc
       2     2       8       48        2      active sync   /dev/sdd
       3     3       8       64        3      active sync   /dev/sde
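One way to check whether the assembled array is at least recognized as an LVM physical volume at the block-device level would be something like this (a sketch; on a healthy setup blkid should report TYPE="LVM2_member" on /dev/md0):

    # Show the metadata signature on the raw array device
    $ blkid /dev/md0

    # Check the consistency of the LVM metadata on the physical volume
    $ pvck /dev/md0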

The only thing I find suspicious is the /dev/md0p1 partition that is automatically created after boot; it does not look right to me. It seems to be treated as another software RAID device, but to me it looks like it is the parity area of the md0 RAID device:

    $ mdadm --detail /dev/md0p1
    /dev/md0p1:
            Version : 0.90
      Creation Time : Tue Oct 27 13:04:40 2009
         Raid Level : raid5
         Array Size : 976752000 (931.50 GiB 1000.19 GB)
      Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 0
        Persistence : Superblock is persistent

        Update Time : Mon Feb 27 15:43:00 2012
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0

             Layout : left-symmetric
         Chunk Size : 64K

               UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
             Events : 0.20

        Number   Major   Minor   RaidDevice State
           0       8       16        0      active sync   /dev/sdb
           1       8       32        1      active sync   /dev/sdc
           2       8       48        2      active sync   /dev/sdd
           3       8       64        3      active sync   /dev/sde
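As far as I understand, an md0pX node appears when the kernel's partition scanner finds something that looks like a partition table in the first sectors of /dev/md0, so it should be possible to inspect that table directly (a sketch; the exact output depends on the tool versions shipped with SLES11 SP1):

    # Print whatever partition table the kernel sees on the array, in sectors
    $ parted /dev/md0 unit s print

    # Alternative view with fdisk
    $ fdisk -lu /dev/md0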

In the YaST-managed partitioning tool on SLES, the software RAID device is listed under the generic hard disks section rather than under the RAID section, and the md0p1 partition shows up in the partition table of the md0 disk.

Is the operating system failing to recognize the software RAID disk correctly, or is this a problem with the LVM configuration?
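If it turns out to be the LVM side, would a device filter in /etc/lvm/lvm.conf be the right way to force LVM onto the whole array device and away from the spurious md0p1 partition? Something along these lines (my assumption, not my current configuration):

    # /etc/lvm/lvm.conf (sketch)
    devices {
        # Accept the whole md array, reject its auto-created partitions,
        # and keep accepting everything else
        filter = [ "a|^/dev/md0$|", "r|^/dev/md0p.*|", "a|.*|" ]

        # Keep LVM from scanning the underlying RAID member disks directly
        md_component_detection = 1
    }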

Any ideas how to fix this?