mdadm RAID6 array reports incorrect size in df -h after growing

I recently grew a 5x 3TB mdadm RAID6 array (8TB usable) in Fedora 18 by adding a sixth disk. After the rebuild and check completed, "mdadm --detail /dev/md127" returned the following:

            Version : 1.2
      Creation Time : Sun Feb 10 22:01:32 2013
         Raid Level : raid6
         Array Size : 11720534016 (11177.57 GiB 12001.83 GB)
      Used Dev Size : 2930133504 (2794.39 GiB 3000.46 GB)
       Raid Devices : 6
      Total Devices : 6
        Persistence : Superblock is persistent

      Intent Bitmap : Internal

        Update Time : Sun Jul 21 17:31:32 2013
              State : active
     Active Devices : 6
    Working Devices : 6
     Failed Devices : 0
      Spare Devices : 0

             Layout : left-symmetric
         Chunk Size : 512K

               Name : ubuntu:tercore
               UUID : f52477e1:ded036fa:95632986:dcb84e51
             Events : 326236

        Number   Major   Minor   RaidDevice State
           0       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1
           2       8       33        2      active sync   /dev/sdc1
           4       8       49        3      active sync   /dev/sdd1
           5       8       80        4      active sync   /dev/sdf
           6       8       64        5      active sync   /dev/sde

All good.
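For reference, the grow was done along the usual lines (a minimal sketch, not my exact commands; /dev/sdX stands for the new disk):

    # add the new disk as a spare, then grow the array onto it
    mdadm --add /dev/md127 /dev/sdX
    mdadm --grow /dev/md127 --raid-devices=6
    # watch the reshape finish before doing anything else
    cat /proc/mdstat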

I then ran "cat /proc/mdstat", which returned the following:

    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid6 sde[6] sdf[5] sda1[0] sdb1[1] sdd1[4] sdc1[2]
          11720534016 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
          bitmap: 0/22 pages [0KB], 65536KB chunk

    unused devices: <none>

Still good.

But when I run "df -h", it incorrectly reports the old RAID capacity:

    Filesystem                           Size  Used Avail Use% Mounted on
    devtmpfs                             922M     0  922M   0% /dev
    tmpfs                                939M  140K  939M   1% /dev/shm
    tmpfs                                939M  2.6M  936M   1% /run
    tmpfs                                939M     0  939M   0% /sys/fs/cgroup
    /dev/mapper/fedora_faufnir--hp-root   26G  7.2G   17G  30% /
    tmpfs                                939M   20K  939M   1% /tmp
    /dev/sdg1                            485M  108M  352M  24% /boot
    /dev/md127                           8.2T  7.6T  135G  99% /home/teracore
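Comparing the size of the block device with the size the filesystem believes it has shows the mismatch directly (a sketch, assuming an ext4 filesystem on the array):

    # size of the md device in bytes
    blockdev --getsize64 /dev/md127
    # size the filesystem thinks it has: Block count x Block size
    dumpe2fs -h /dev/md127 | grep -E 'Block (count|size)'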

Can anyone help me resolve this mismatch? Naturally it also causes Samba to report the wrong array capacity to my Windows laptop.

Thanks in advance! Will.

I think you forgot to run the resize2fs command. mdadm reports the size of the block device, while df reports the size of the filesystem on it, and the filesystem does not grow automatically when the array does. Here is a demonstration on a test array:

    # mdadm --detail /dev/md0
    /dev/md0:
            Version : 0.90
      Creation Time : Sun Jul 21 23:50:49 2013
         Raid Level : raid6
         Array Size : 62914368 (60.00 GiB 64.42 GB)
      Used Dev Size : 20971456 (20.00 GiB 21.47 GB)
       Raid Devices : 5
      Total Devices : 5
    Preferred Minor : 0
        Persistence : Superblock is persistent

        Update Time : Mon Jul 22 00:04:43 2013
              State : clean
     Active Devices : 5
    Working Devices : 5
     Failed Devices : 0
      Spare Devices : 0

         Chunk Size : 64K

               UUID : c0a5733d:46d5dd5e:b24ac321:6c547228
             Events : 0.13992

        Number   Major   Minor   RaidDevice State
           0       8       16        0      active sync   /dev/sdb
           1       8       32        1      active sync   /dev/sdc
           2       8       48        2      active sync   /dev/sdd
           3       8       64        3      active sync   /dev/sde
           4       8       80        4      active sync   /dev/sdf

    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda2              39G  1.6G   35G   5% /
    /dev/sda1             494M   23M  446M   5% /boot
    tmpfs                 500M     0  500M   0% /dev/shm
    /dev/md0               60G  188M   59G   1% /raid6

    # resize2fs /dev/md0
    resize2fs 1.39 (29-May-2006)
    Filesystem at /dev/md0 is mounted on /raid6; on-line resizing required
    Performing an on-line resize of /dev/md0 to 20971456 (4k) blocks.
    The filesystem on /dev/md0 is now 20971456 blocks long.

    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda2              39G  1.6G   35G   5% /
    /dev/sda1             494M   23M  446M   5% /boot
    tmpfs                 500M     0  500M   0% /dev/shm
    /dev/md0               79G  192M   79G   1% /raid6

    # mdadm --detail /dev/md0
    /dev/md0:
            Version : 0.90
      Creation Time : Sun Jul 21 23:50:49 2013
         Raid Level : raid6
         Array Size : 83885824 (80.00 GiB 85.90 GB)
      Used Dev Size : 20971456 (20.00 GiB 21.47 GB)
       Raid Devices : 6
      Total Devices : 6
    Preferred Minor : 0
        Persistence : Superblock is persistent

        Update Time : Mon Jul 22 00:04:43 2013
              State : clean
     Active Devices : 6
    Working Devices : 6
     Failed Devices : 0
      Spare Devices : 0

         Chunk Size : 64K

               UUID : c0a5733d:46d5dd5e:b24ac321:6c547228
             Events : 0.13992

        Number   Major   Minor   RaidDevice State
           0       8       16        0      active sync   /dev/sdb
           1       8       32        1      active sync   /dev/sdc
           2       8       48        2      active sync   /dev/sdd
           3       8       64        3      active sync   /dev/sde
           4       8       80        4      active sync   /dev/sdf
           5       8       96        5      active sync   /dev/sdg

PS: I would suggest unmounting /dev/md127 before the resize operation.
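Done that way, the whole fix would look roughly like this (a sketch, assuming an ext3/ext4 filesystem on /dev/md127):

    umount /home/teracore
    e2fsck -f /dev/md127     # resize2fs requires a fresh fsck when the fs is offline
    resize2fs /dev/md127     # with no size argument, grows to fill the device
    mount /dev/md127 /home/teracore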