I've set up a Solaris Express 11 machine with some fairly fast HDDs behind a RAID controller, configured the devices as a zpool with compression enabled, and added a mirrored log and two cache devices. The datasets are exposed as FC targets for ESX to consume, and I've filled them with some data to play around with. The L2ARC has partially filled up (and for some reason no longer fills further), but I can hardly see any use of it. zpool iostat -v shows that not much has been read from the cache so far:
                  capacity     operations    bandwidth
    pool       alloc   free   read  write   read  write
    tank        222G  1.96T    189     84   994K  1.95M
      c7t0d0s0  222G  1.96T    189     82   994K  1.91M
      mirror   49.5M  5.51G      0      2      0  33.2K
        c8t2d0p1   -      -      0      2      0  33.3K
        c8t3d0p1   -      -      0      2      0  33.3K
    cache          -      -      -      -      -      -
      c11d0p2  23.5G  60.4G      2      1  33.7K   113K
      c10d0p2  23.4G  60.4G      2      1  34.2K   113K
And the L2ARC-enabled arcstat.pl script shows a 100% L2ARC miss rate for the current workload:
    ./arcstat.pl -f read,hits,miss,hit%,l2read,l2hits,l2miss,l2hit%,arcsz,l2size 5
    read  hits  miss  hit%  l2read  l2hits  l2miss  l2hit%  arcsz  l2size
    [...]
     243   107   136    44     136       0     136       0   886M     39G
     282   144   137    51     137       0     137       0   886M     39G
     454   239   214    52     214       0     214       0   889M     39G
    [...]
I first suspected it might be an effect of an overly large recordsize, causing the L2ARC to classify everything as a streaming load, but the zpool contains nothing but zfs volumes (I created them "sparse" with zfs create -V 500G -s <datasetname>), which don't even have a recordsize parameter to change.
I have also come across many mentions of the L2ARC needing 200 bytes of RAM per record for its metadata, but so far I haven't been able to find out what the L2ARC would consider a "record" on a volume dataset – a single sector of 512 bytes? Could it be suffering from a lack of RAM for the metadata, and have simply filled up with junk that is never read again?
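For what it's worth, the 200-bytes-per-record figure makes the 512-byte question easy to sanity-check with some arithmetic (a rough sketch; the 200-byte overhead is the commonly cited figure, not a measured kernel constant):

```python
# Back-of-the-envelope: ARC RAM consumed by L2ARC headers, assuming
# the commonly cited ~200 bytes of ARC metadata per cached record.
HEADER_BYTES = 200  # assumed per-record overhead; not an exact kernel constant

def arc_header_gib(l2arc_bytes, record_bytes):
    """RAM (GiB) needed just to index an L2ARC of the given size."""
    records = l2arc_bytes // record_bytes
    return records * HEADER_BYTES / 2**30

L2ARC_BYTES = 60 * 2**30  # the ~60 GB of cache devices in this pool

# If every "record" were a 512-byte sector, the headers alone would need
# far more RAM than the 2 GB installed here:
print(round(arc_header_gib(L2ARC_BYTES, 512), 1))         # 23.4 GiB
# With 128 KiB records the overhead would be negligible:
print(round(arc_header_gib(L2ARC_BYTES, 128 * 1024), 2))  # 0.09 GiB
```

So if the L2ARC really did track 512-byte records, the installed RAM could not index more than a small fraction of the cache devices.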
Edit: Adding 8 GB of RAM on top of the 2 GB already installed worked out fine – the additional RAM is happily used even on a 32-bit installation, and the L2ARC has now grown and is being hit:
        time  read  hit%  l2hit%  arcsz  l2size
    21:43:38   340    97      13   6.4G     95G
    21:43:48   185    97      18   6.4G     95G
    21:43:58   655    91       2   6.4G     95G
    21:44:08   432    98      16   6.4G     95G
    21:44:18   778    92       9   6.4G     95G
    21:44:28   910    99      19   6.4G     95G
    21:44:38  4.6K    99      18   6.4G     95G
Thanks to ewwhite.
You should have more RAM in the system. Pointers into the L2ARC need to be kept in RAM (in the ARC), so I think you'd need about 4 GB or 6 GB of RAM to make better use of the ~60 GB of L2ARC you have.
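The 4-6 GB figure follows from keeping the L2ARC within roughly 10-15x of main RAM, the rule of thumb from the thread quoted below; a quick sketch:

```python
# Sizing RAM from L2ARC capacity using the ~10-15x rule of thumb
# from the ZFS mailing-list thread.
def min_ram_gb(l2arc_gb, max_ratio):
    """RAM (GB) needed to keep the L2ARC within max_ratio x RAM."""
    return l2arc_gb / max_ratio

L2ARC_GB = 60  # the ~60 GB of cache devices in this pool

print(min_ram_gb(L2ARC_GB, 15))  # 4.0 GB at the looser 15x ratio
print(min_ram_gb(L2ARC_GB, 10))  # 6.0 GB at the stricter 10x ratio
```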
This is from a recent thread on the ZFS list:
http://opensolaris.org/jive/thread.jspa?threadID=131296
L2ARC is "secondary" ARC. ZFS attempts to cache all reads in the ARC (Adaptive Replacement Cache) – should it find that it doesn't have enough space in the ARC (which is RAM-resident), it will evict some data over to the L2ARC (which in turn will simply dump the least-recently-used data when it runs out of space). Remember, however, that every time something gets written to the L2ARC, a little bit of space is taken up in the ARC itself (a pointer to the L2ARC entry needs to be kept in ARC). So, it's not possible to have a giant L2ARC and a tiny ARC.

As a rule of thumb, I try not to have my L2ARC exceed my main RAM by more than 10-15x (with really big-memory machines, I'm a bit looser and allow 20-25x or so, but still...). So, if you are thinking of getting a 160 GB SSD, it would be wise to go for at minimum 8 GB of RAM.

Once again, the amount of ARC space reserved for an L2ARC entry is fixed, and independent of the actual block size stored in the L2ARC. The gist of this is that tiny files eat up a disproportionate amount of system resources for their size (smaller size = larger % overhead vis-à-vis large files).
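The small-file point can be made concrete: since the ARC header per L2ARC entry is a fixed size, the percentage overhead scales inversely with block size (a sketch assuming the ~200-byte figure mentioned in the question; the exact per-entry size may differ):

```python
# Fixed per-entry ARC header vs. cached block size: percentage overhead.
HEADER_BYTES = 200  # rule-of-thumb figure from the discussion, not exact

def overhead_pct(block_bytes):
    """ARC header cost as a percentage of the cached block's size."""
    return 100.0 * HEADER_BYTES / block_bytes

print(f"{overhead_pct(512):.1f}%")         # 512 B blocks   -> 39.1%
print(f"{overhead_pct(128 * 1024):.2f}%")  # 128 KiB blocks -> 0.15%
```

The same 200 bytes that is noise next to a 128 KiB block is nearly 40% of a 512-byte one, which is exactly the disproportionate overhead the quote describes.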