Why does df show only half the size of my RAID10 array?

I created a RAID10 array from four 75 G drives, which should yield 150 G of storage.
After everything completed (including the initial sync), it all looked fine, except that the output of df -h shows only 73 G of storage at the designated mount point.

Details:

  • The machine is an m1.large Ubuntu 11.10 instance on Amazon EC2.
  • All 4 drives are EBS volumes, each 75 G in size.
  • The RAID10 array was created with the following script:

 #!/bin/sh
 disk1="/dev/sdh1"
 disk2="/dev/sdh2"
 disk3="/dev/sdh3"
 disk4="/dev/sdh4"

 echo "*** Verifying existence of 4 volumes $disk1, $disk2, $disk3 and $disk4"
 if [ -b "$disk1" -a -b "$disk2" -a -b "$disk3" -a -b "$disk4" ]; then
     echo "# Found expected block devices."
 else
     echo "!!! Did not find expected block devices. Error."
     exit -1
 fi

 until read -p "??? - How big (in GB) are the disks (They should be the same size)? " disk_size && [ $disk_size ]; do
     echo "Please enter a disk size."
 done

 lv_size=$(echo "scale=2; $disk_size * 2.0" | bc)
 echo "*** Assuming a per disk size of $disk_size gigs, will create a logical volume of $lv_size gigs, with $lv_size reserved for snapshots"

 echo "*** Partitioning disks..."
 echo "~ Partitioning $disk1"
 echo ',,L' | sfdisk $disk1
 echo "~ Partitioning $disk2"
 echo ',,L' | sfdisk $disk2
 echo "~ Partitioning $disk3"
 echo ',,L' | sfdisk $disk3
 echo "~ Partitioning $disk4"
 echo ',,L' | sfdisk $disk4

 sleep 6

 echo "*** Creating /dev/md0 as a RAID 10"
 /sbin/mdadm /dev/md0 --create --level=10 --raid-devices=4 $disk1 $disk2 $disk3 $disk4

 echo " ~ Allocating /dev/md0 as a physical volume."
 /sbin/pvcreate /dev/md0

 echo " ~ Allocating a Volume Group 'mongodb_vg'"
 /sbin/vgcreate -s 64M mongodb_vg /dev/md0

 echo " ~ Creating a Logical Volume 'mongodb_lv'"
 num_extents=$(echo "$disk_size * 1000 / 64" | bc)
 /sbin/lvcreate -l $num_extents -nmongodb_lv mongodb_vg

 echo " ~ Formatting the new volume (/dev/mongodb_vg/mongodb_lv) with EXT4"
 /sbin/mkfs.ext4 /dev/mongodb_vg/mongodb_lv

 echo " ~ Done! Go ahead and mount the new filesystem. Suggested FStab: "
 echo " /dev/mongodb_vg/mongodb_lv /data ext4 defaults,noatime 0 0"

Here is the output I got:

 *** Verifying existence of 4 volumes /dev/xvdh1, /dev/xvdh2, /dev/xvdh3 and /dev/xvdh4
 # Found expected block devices.
 ??? - How big (in GB) are the disks (They should be the same size)? 75
 *** Assuming a per disk size of 75 gigs, will create a logical volume of 150.0 gigs, with 150.0 reserved for snapshots
 *** Partitioning disks...
 ~ Partitioning /dev/xvdh1
 Checking that no-one is using this disk right now ...
 BLKRRPART: Invalid argument
 OK
 Disk /dev/xvdh1: 9790 cylinders, 255 heads, 63 sectors/track
 sfdisk: ERROR: sector 0 does not have an msdos signature
  /dev/xvdh1: unrecognized partition table type
 Old situation:
 No partitions found
 New situation:
 Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

    Device Boot Start     End   #cyls    #blocks   Id  System
 /dev/xvdh1p1          0+   9789    9790-  78638174+  83  Linux
 /dev/xvdh1p2          0       -       0          0    0  Empty
 /dev/xvdh1p3          0       -       0          0    0  Empty
 /dev/xvdh1p4          0       -       0          0    0  Empty
 Warning: no primary partition is marked bootable (active)
 This does not matter for LILO, but the DOS MBR will not boot this disk.
 Successfully wrote the new partition table

 Re-reading the partition table ...
 BLKRRPART: Invalid argument

 If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
 to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
 (See fdisk(8).)
 ~ Partitioning /dev/xvdh2
 Checking that no-one is using this disk right now ...
 BLKRRPART: Invalid argument
 OK
 Disk /dev/xvdh2: 9790 cylinders, 255 heads, 63 sectors/track
 sfdisk: ERROR: sector 0 does not have an msdos signature
  /dev/xvdh2: unrecognized partition table type
 Old situation:
 No partitions found
 New situation:
 Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

    Device Boot Start     End   #cyls    #blocks   Id  System
 /dev/xvdh2p1          0+   9789    9790-  78638174+  83  Linux
 /dev/xvdh2p2          0       -       0          0    0  Empty
 /dev/xvdh2p3          0       -       0          0    0  Empty
 /dev/xvdh2p4          0       -       0          0    0  Empty
 Warning: no primary partition is marked bootable (active)
 This does not matter for LILO, but the DOS MBR will not boot this disk.
 Successfully wrote the new partition table

 Re-reading the partition table ...
 BLKRRPART: Invalid argument

 If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
 to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
 (See fdisk(8).)
 ~ Partitioning /dev/xvdh3
 Checking that no-one is using this disk right now ...
 BLKRRPART: Invalid argument
 OK
 Disk /dev/xvdh3: 9790 cylinders, 255 heads, 63 sectors/track
 sfdisk: ERROR: sector 0 does not have an msdos signature
  /dev/xvdh3: unrecognized partition table type
 Old situation:
 No partitions found
 New situation:
 Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

    Device Boot Start     End   #cyls    #blocks   Id  System
 /dev/xvdh3p1          0+   9789    9790-  78638174+  83  Linux
 /dev/xvdh3p2          0       -       0          0    0  Empty
 /dev/xvdh3p3          0       -       0          0    0  Empty
 /dev/xvdh3p4          0       -       0          0    0  Empty
 Warning: no primary partition is marked bootable (active)
 This does not matter for LILO, but the DOS MBR will not boot this disk.
 Successfully wrote the new partition table

 Re-reading the partition table ...
 BLKRRPART: Invalid argument

 If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
 to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
 (See fdisk(8).)
 ~ Partitioning /dev/xvdh4
 Checking that no-one is using this disk right now ...
 BLKRRPART: Invalid argument
 OK
 Disk /dev/xvdh4: 9790 cylinders, 255 heads, 63 sectors/track
 sfdisk: ERROR: sector 0 does not have an msdos signature
  /dev/xvdh4: unrecognized partition table type
 Old situation:
 No partitions found
 New situation:
 Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

    Device Boot Start     End   #cyls    #blocks   Id  System
 /dev/xvdh4p1          0+   9789    9790-  78638174+  83  Linux
 /dev/xvdh4p2          0       -       0          0    0  Empty
 /dev/xvdh4p3          0       -       0          0    0  Empty
 /dev/xvdh4p4          0       -       0          0    0  Empty
 Warning: no primary partition is marked bootable (active)
 This does not matter for LILO, but the DOS MBR will not boot this disk.
 Successfully wrote the new partition table

 Re-reading the partition table ...
 BLKRRPART: Invalid argument

 If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
 to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
 (See fdisk(8).)
 *** Creating /dev/md0 as a RAID 10
 mdadm: partition table exists on /dev/xvdh1 but will be lost or
        meaningless after creating array
 mdadm: partition table exists on /dev/xvdh2 but will be lost or
        meaningless after creating array
 mdadm: partition table exists on /dev/xvdh3 but will be lost or
        meaningless after creating array
 mdadm: partition table exists on /dev/xvdh4 but will be lost or
        meaningless after creating array
 Continue creating array? y
 mdadm: Defaulting to version 1.2 metadata
 mdadm: array /dev/md0 started.
  ~ Allocating /dev/md0 as a physical volume.
 Physical volume "/dev/md0" successfully created
  ~ Allocating a Volume Group 'mongodb_vg'
 Volume group "mongodb_vg" successfully created
  ~ Creating a Logical Volume 'mongodb_lv'
 Logical volume "mongodb_lv" created
  ~ Formatting the new volume (/dev/mongodb_vg/mongodb_lv) with EXT4
 mke2fs 1.41.14 (22-Dec-2010)
 Filesystem label=
 OS type: Linux
 Block size=4096 (log=2)
 Fragment size=4096 (log=2)
 Stride=128 blocks, Stripe width=256 blocks
 4800512 inodes, 19185664 blocks
 959283 blocks (5.00%) reserved for the super user
 First data block=0
 Maximum filesystem blocks=4294967296
 586 block groups
 32768 blocks per group, 32768 fragments per group
 8192 inodes per group
 Superblock backups stored on blocks:
         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
         2654208, 4096000, 7962624, 11239424

 Writing inode tables: done
 Creating journal (32768 blocks): done
 Writing superblocks and filesystem accounting information: done

 This filesystem will be automatically checked every 36 mounts or
 180 days, whichever comes first.  Use tune2fs -c or -i to override.
  ~ Done! Go ahead and mount the new filesystem. Suggested FStab:
  /dev/mongodb_vg/mongodb_lv /data ext4 defaults,noatime 0 0

Here is the relevant output of df -h:

 Filesystem                         Size  Used Avail Use% Mounted on
 /dev/mapper/mongodb_vg-mongodb_lv   73G  180M   69G   1% /ebsRaid

Here is the output of mdadm --detail /dev/md0:

 /dev/md0:
         Version : 1.2
   Creation Time : Wed Feb 29 10:14:39 2012
      Raid Level : raid10
      Array Size : 157283328 (150.00 GiB 161.06 GB)
   Used Dev Size : 78641664 (75.00 GiB 80.53 GB)
    Raid Devices : 4
   Total Devices : 4
     Persistence : Superblock is persistent

     Update Time : Wed Feb 29 13:21:49 2012
           State : clean
  Active Devices : 4
 Working Devices : 4
  Failed Devices : 0
   Spare Devices : 0

          Layout : near=2
      Chunk Size : 512K

            Name : my.site.com:0  (local to host my.site.com)
            UUID : CENSORED
          Events : 19

     Number   Major   Minor   RaidDevice State
        0     202      113        0      active sync   /dev/xvdh1
        1     202      114        1      active sync   /dev/xvdh2
        2     202      115        2      active sync   /dev/xvdh3
        3     202      116        3      active sync   /dev/xvdh4

Here is the output of cat /proc/mdstat:

 Personalities : [raid10]
 md0 : active raid10 xvdh4[3] xvdh3[2] xvdh2[1] xvdh1[0]
       157283328 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

 unused devices: <none>

EDIT 1:

Here is the output of lvdisplay -m:

   --- Logical volume ---
   LV Name                /dev/mongodb_vg/mongodb_lv
   VG Name                mongodb_vg
   LV UUID                SEpGth-cXd3-ZFhy-XLHo-T5pV-gEd1-Tgancs
   LV Write Access        read/write
   LV Status              available
   # open                 1
   LV Size                73.19 GiB
   Current LE             1171
   Segments               1
   Allocation             inherit
   Read ahead sectors     auto
   - currently set to     4096
   Block device           252:0

   --- Segments ---
   Logical extent 0 to 1170:
     Type                linear
     Physical volume     /dev/md0
     Physical extents    0 to 1170

EDIT 2:

Here is the output of vgdisplay:

   --- Volume group ---
   VG Name               mongodb_vg
   System ID
   Format                lvm2
   Metadata Areas        1
   Metadata Sequence No  2
   VG Access             read/write
   VG Status             resizable
   MAX LV                0
   Cur LV                1
   Open LV               1
   Max PV                0
   Cur PV                1
   Act PV                1
   VG Size               149.94 GiB
   PE Size               64.00 MiB
   Total PE              2399
   Alloc PE / Size       1171 / 73.19 GiB
   Free  PE / Size       1228 / 76.75 GiB
   VG UUID               CENSORED

Your volume group is not using all of the extents that were created for it:

 VG Size               149.94 GiB
 PE Size               64.00 MiB
 Total PE              2399
 Alloc PE / Size       1171 / 73.19 GiB
 Free  PE / Size       1228 / 76.75 GiB

You can add the remaining extents with the following command:

 lvextend -l +100%FREE /dev/mongodb_vg/mongodb_lv /dev/md0 

Before you type this in, please read the man page.

This command extends the logical volume to use all of the remaining FREE extents (you can also choose fewer extents if you want to keep some free). It will use the extents on md0 to do so.
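If you would rather extend by an explicit extent count than by +100%FREE, the free-extent number can be pulled out of vgdisplay and passed to lvextend. A sketch, assuming the vgdisplay output format shown above (here the line is pasted in directly so the parsing can be checked without a live VG):

```shell
# On a live system this would be: vgdisplay mongodb_vg | awk '/Free  PE/ {print $5}'
vg_line='Free  PE / Size       1228 / 76.75 GiB'
free_pe=$(echo "$vg_line" | awk '{print $5}')   # fifth field is the extent count
echo "Free extents: $free_pe"
# Then extend by exactly that many extents (equivalent to -l +100%FREE here):
# lvextend -l +$free_pe /dev/mongodb_vg/mongodb_lv /dev/md0
```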

You can then resize the filesystem online with:

 resize2fs /dev/mongodb_vg/mongodb_lv 

It should say that it is doing an online resize. I believe this will fix your problem, but please read the man pages and understand what these commands do before trying them. I am not responsible if you trash your disks.
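As a sanity check on what to expect afterwards: once the LV covers all 2399 extents, its size should match the full array. The back-of-the-envelope numbers (using the PE size from vgdisplay):

```shell
total_pe=2399   # "Total PE" from vgdisplay
pe_mib=64       # "PE Size" is 64.00 MiB
# Full VG size in GiB: 2399 * 64 MiB / 1024
vg_gib=$(awk "BEGIN { printf \"%.4f\", $total_pe * $pe_mib / 1024 }")
echo "Full VG: $vg_gib GiB"   # 149.9375, i.e. the 149.94 GiB vgdisplay reports
```

After lvextend and resize2fs, df -h should therefore show roughly 150 G (minus ext4 overhead and the reserved blocks) instead of 73 G.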

Also, RAID with LVM on top, all on EBS, seems like an unnecessary number of virtual-disk layers. You don't gain much performance or data safety by adding the extra RAID layer, and if I remember correctly, LVM can do mirroring/striping itself. While you technically can run LVM on RAID on EBS, I'm not sure you gain much by it (though I'm quite happy to be shown I'm wrong).