flashcache with mdadm and LVM

I'm having trouble setting up flashcache on a system that uses LVM and mdadm, and I suspect I'm either missing an obvious step or getting a mapping wrong. I hope someone can point me in the right direction.

System information:

CentOS 6.4 64-bit

mdadm configuration:

 md0 : active raid1 sdd3[2] sde3[3] sdf3[4] sdg3[5] sdh3[1] sda3[0]
       204736 blocks super 1.0 [6/6] [UUUUUU]
 md2 : active raid6 sdd5[2] sde5[3] sdf5[4] sdg5[5] sdh5[1] sda5[0]
       3794905088 blocks super 1.1 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
 md3 : active raid0 sdc1[1] sdb1[0]
       250065920 blocks super 1.1 512k chunks
 md1 : active raid10 sdh1[1] sda1[0] sdd1[2] sdf1[4] sdg1[5] sde1[3]
       76749312 blocks super 1.1 512K chunks 2 near-copies [6/6] [UUUUUU]

pvscan:

 PV /dev/mapper/ssdcache   VG Xenvol   lvm2 [3.53 TiB / 3.53 TiB free]
 Total: 1 [3.53 TiB] / in use: 1 [3.53 TiB] / in no VG: 0 [0   ]

The flashcache create command used:

 flashcache_create -p back ssdcache /dev/md3 /dev/md2 
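As a quick sanity check (assuming the stock flashcache utilities), the resulting mapping can be inspected with dmsetup; and since this is a write-back cache, after a reboot it is reattached with flashcache_load rather than re-created, which would discard dirty blocks:

 # Inspect the device-mapper table that flashcache_create set up
 dmsetup table ssdcache

 # Write-back caches keep their metadata on the SSD, so after a reboot
 # the existing cache is loaded instead of being created again:
 flashcache_load /dev/md3 ssdcache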

pvdisplay:

 --- Physical volume ---
 PV Name               /dev/mapper/ssdcache
 VG Name               Xenvol
 PV Size               3.53 TiB / not usable 106.00 MiB
 Allocatable           yes
 PE Size               128.00 MiB
 Total PE              28952
 Free PE               28912
 Allocated PE          40
 PV UUID               w0ENVR-EjvO-gAZ8-TQA1-5wYu-ISOk-pJv7LV

vgdisplay:

 --- Volume group ---
 VG Name               Xenvol
 System ID
 Format                lvm2
 Metadata Areas        1
 Metadata Sequence No  2
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                1
 Open LV               1
 Max PV                0
 Cur PV                1
 Act PV                1
 VG Size               3.53 TiB
 PE Size               128.00 MiB
 Total PE              28952
 Alloc PE / Size       40 / 5.00 GiB
 Free  PE / Size       28912 / 3.53 TiB
 VG UUID               7vfKWh-ENPb-P8dV-jVlb-kP0o-1dDd-N8zzYj

So that is where I am. I thought the job was done, but when I create a logical volume called test and mount it at /mnt/test, sequential writes are woeful, at around 60 MB/s.

/dev/md3 is two SSDs in RAID0, which on their own manage around 800 MB/s of sequential writes, and I am using them to cache /dev/md2, which is six 1 TB hard drives in RAID6.
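For reference, a minimal sketch of the kind of sequential-write test used for those numbers, with dd (the target path is just an example), using direct I/O so the page cache does not inflate the result:

 # Sequential write onto the filesystem that sits on the cached LV;
 # oflag=direct bypasses the page cache, conv=fsync flushes at the end
 dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=4096 oflag=direct conv=fsync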

I have spent the whole day reading various pages, some of them here, and from the results it is obvious that the cache is not doing anything, but I am not sure why.

I added a filter line to lvm.conf:

 filter = [ "r|/dev/sdb|", "r|/dev/sdc|", "r|/dev/md3|" ] 

It is probably something silly, but the cache is clearly not taking any writes, so I suspect I have not mapped it or mounted the cache correctly.

dmsetup status:

 ssdcache: 0 7589810176 flashcache stats:
     reads(142), writes(0)
     read hits(133), read hit percent(93)
     write hits(0) write hit percent(0)
     dirty write hits(0) dirty write hit percent(0)
     replacement(0), write replacement(0)
     write invalidates(0), read invalidates(0)
     pending enqueues(0), pending inval(0)
     metadata dirties(0), metadata cleans(0)
     metadata batch(0) metadata ssd writes(0)
     cleanings(0) fallow cleanings(0)
     no room(0) front merge(0) back merge(0)
     force_clean_block(0)
     disk reads(9), disk writes(0) ssd reads(133) ssd writes(9)
     uncached reads(0), uncached writes(0), uncached IO requeue(0)
     disk read errors(0), disk write errors(0) ssd read errors(0) ssd write errors(0)
     uncached sequential reads(0), uncached sequential writes(0)
     pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
     lru hot blocks(31136000), lru warm blocks(31136000)
     lru promotions(0), lru demotions(0)
 Xenvol-test: 0 10485760 linear

I have included as much information as I can think of, and I look forward to any replies.

I can see that /dev/md2 is not excluded by your lvm.conf filter, but it should be.
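A quick way to confirm that the test LV really is stacked on /dev/md2 rather than on the cache device (device names taken from your output):

 dmsetup deps Xenvol-test    # (major, minor) of the device(s) the LV maps onto
 dmsetup ls                  # maps dm names (ssdcache, Xenvol-test) to (major, minor)
 ls -l /dev/md2              # major/minor of the raw RAID6 array, for comparison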

In a setup this complex, I think you are better off explicitly allowing the LVM devices and rejecting everything else:

 filter = [ "...", "a|/dev/md0|", "a|/dev/md1|", "a|/dev/mapper/ssdcache|", "r|.*|" ] 

In addition, iostat can be used to monitor the actual device activity.
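For example, something along these lines while the write test is running; with a working cache, most of the write traffic should land on the SSDs (sdb/sdc) rather than on md2 directly:

 # Extended per-device statistics in MB, refreshed every 2 seconds
 iostat -xm 2 /dev/sdb /dev/sdc /dev/md2 /dev/md3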

PS:

I am also quite pessimistic about your crazy storage layout, where the drives are split into many partitions that take part in many different RAIDs. Something like this:

System {RAID1 (/dev/ssd1p1+/dev/ssd2p1)}

Data {RAID10 (6 whole drives) + flashcache on RAID1 (/dev/ssd1p2+/dev/ssd2p2)}

– would be much more attractive :).

UPD:
Or, even better (a rough sketch follows below):

RAID1 over the whole SSDs: the system plus a partition for flashcache

RAID10/6 over the whole HDDs + flashcache
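A rough, purely illustrative sketch of that layout with mdadm and flashcache, using made-up device names (/dev/sda and /dev/sdb as the two SSDs, /dev/sd[c-h] as the six HDDs):

 # RAID1 across the two whole SSDs; afterwards partition the array
 # (e.g. with parted) into md0p1 for the system and md0p2 for the cache
 mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

 # RAID10 across the six whole HDDs
 mdadm --create /dev/md1 --level=10 --raid-devices=6 /dev/sd[c-h]

 # Write-back flashcache: SSD partition in front of the HDD array,
 # and the single LVM PV goes on top of the cache device only
 flashcache_create -p back ssdcache /dev/md0p2 /dev/md1
 pvcreate /dev/mapper/ssdcache
 vgcreate Xenvol /dev/mapper/ssdcache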