High I/O on DRBD disk drbd10 on the stacked site

We have four Red Hat boxes, Dell PowerEdge R630s (call them a, b, c, d), running the following OS/packages:

RedHat EL 6.5, MySQL Enterprise 5.6, DRBD 8.4, Corosync 1.4.7

We have set up a 4-way stacked DRBD resource as follows:

Cluster-1: servers a and b, connected to each other over the local LAN. Cluster-2: servers c and d.

Cluster-1 and Cluster-2 are in different data centers and are connected to each other via stacked DRBD over virtual IPs.

The drbd0 disk (1 GB) was created locally on each server, and drbd10 is stacked on top of it.
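
(For reference: a stack like this is typically brought up and inspected with drbdadm's stacked mode. This is a sketch in 8.4 syntax, not commands from the post; the resource names match the configs below.)

    drbdadm up clusterdb                     # lower-level device, /dev/drbd0
    drbdadm --stacked up clusterdb_stacked   # stacked device, /dev/drbd10
    cat /proc/drbd                           # connection/role state of both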

The overall setup consists of four tiers: Tomcat front-end application -> RabbitMQ -> memcache -> MySQL/DRBD.

We are seeing very high disk I/O even though there is hardly any activity yet. Traffic will ramp up over the next few weeks, so we are worried about a serious impact on performance. I/O utilization only goes high on the stacked site (sometimes 90% and above); the secondary site does not have this problem. Utilization is high even when the application is idle.
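
(The post doesn't say how the numbers below were gathered; commands along these lines would produce them, with sysstat's sar for the history and iostat for live extended stats.)

    iostat -dx 60            # per-device stats: w/s, await, svctm, %util
    sar -dp -s 10:30:00      # historical per-device activity from sysstat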

Please share any suggestions/tuning guidelines that could help us resolve this.

    resource clusterdb {
      protocol C;
      handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
      }
      startup {
        degr-wfc-timeout 120;    # 2 minutes.
        outdated-wfc-timeout 2;  # 2 seconds.
      }
      disk {
        on-io-error detach;
        no-disk-barrier;
        no-md-flushes;
      }
      net {
        cram-hmac-alg "sha1";
        shared-secret "clusterdb";
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
      }
      syncer {
        rate 10M;
        al-extents 257;
        on-no-data-accessible io-error;
      }
      on server-1 {
        device    /dev/drbd0;
        disk      /dev/sda2;
        address   10.170.26.28:7788;
        meta-disk internal;
      }
      on server-2 {
        device    /dev/drbd0;
        disk      /dev/sda2;
        address   10.170.26.27:7788;
        meta-disk internal;
      }
    }

Stacked configuration:

    resource clusterdb_stacked {
      protocol A;
      handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
      }
      startup {
        degr-wfc-timeout 120;    # 2 minutes.
        outdated-wfc-timeout 2;  # 2 seconds.
      }
      disk {
        on-io-error detach;
        no-disk-barrier;
        no-md-flushes;
      }
      net {
        cram-hmac-alg "sha1";
        shared-secret "clusterdb";
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
      }
      syncer {
        rate 10M;
        al-extents 257;
        on-no-data-accessible io-error;
      }
      stacked-on-top-of clusterdb {
        device  /dev/drbd10;
        address 10.170.26.28:7788;
      }
      stacked-on-top-of clusterdb_DR {
        device  /dev/drbd10;
        address 10.170.26.60:7788;
      }
    }
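
(Not shown in the post, but both definitions can be sanity-checked with drbdadm dump, 8.4 syntax:)

    drbdadm dump clusterdb                      # effective lower-level config
    drbdadm --stacked dump clusterdb_stacked    # effective stacked config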

Requested data:

    Time       svctm   w_wait   %util
    10:32:01   3.07    55.23    94.11
    10:33:01   3.29    50.75    96.27
    10:34:01   2.82    41.44    96.15
    10:35:01   3.01    72.30    96.86
    10:36:01   4.52    40.41    94.24
    10:37:01   3.80    50.42    83.86
    10:38:01   3.03    72.54    97.17
    10:39:01   4.96    37.08    89.45
    10:41:01   3.55    66.48    70.19
    10:45:01   2.91    53.70    89.57
    10:46:01   2.98    49.49    94.73
    10:55:01   3.01    48.38    93.70
    10:56:01   2.98    43.47    97.26
    11:05:01   2.80    61.84    86.93
    11:06:01   2.67    43.35    96.89
    11:07:01   2.68    37.67    95.41

Updating the question based on the comments:

Comparing latency between the local and the stacked links:

Between the two local servers:

    [root@pri-site-valsql-a]#ping pri-site-valsql-b
    PING pri-site-valsql-b.csn.infra.sm (10.170.24.23) 56(84) bytes of data.
    64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=1 ttl=64 time=0.143 ms
    64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=2 ttl=64 time=0.145 ms
    64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=3 ttl=64 time=0.132 ms
    64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=4 ttl=64 time=0.145 ms
    64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=5 ttl=64 time=0.150 ms
    64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=6 ttl=64 time=0.145 ms
    64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=7 ttl=64 time=0.132 ms
    64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=8 ttl=64 time=0.127 ms
    64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=9 ttl=64 time=0.134 ms
    64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=10 ttl=64 time=0.149 ms
    64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=11 ttl=64 time=0.147 ms
    ^C
    --- pri-site-valsql-b.csn.infra.sm ping statistics ---
    11 packets transmitted, 11 received, 0% packet loss, time 10323ms
    rtt min/avg/max/mdev = 0.127/0.140/0.150/0.016 ms

Between the two stacked servers:

    [root@pri-site-valsql-a]#ping dr-site-valsql-b
    PING dr-site-valsql-b.csn.infra.sm (10.170.24.48) 56(84) bytes of data.
    64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=1 ttl=64 time=9.68 ms
    64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=2 ttl=64 time=4.51 ms
    64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=3 ttl=64 time=4.53 ms
    64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=4 ttl=64 time=4.51 ms
    64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=5 ttl=64 time=4.51 ms
    64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=6 ttl=64 time=4.52 ms
    64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=7 ttl=64 time=4.52 ms
    ^C
    --- dr-site-valsql-b.csn.infra.sm ping statistics ---
    7 packets transmitted, 7 received, 0% packet loss, time 6654ms
    rtt min/avg/max/mdev = 4.510/5.258/9.686/1.808 ms
    [root@pri-site-valsql-a]#
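
(A back-of-the-envelope the comparison invites, not from the original post: a synchronous protocol-C hop completes at most one write per RTT per in-flight request, so these two links differ by roughly 30x. The stacked link runs protocol A, which hides that latency only until the TCP send buffer fills, which is where the answer below picks up.)

    # Rough per-in-flight-write ceilings implied by the RTTs above:
    echo "local (0.140 ms RTT): $(echo '1000/0.140' | bc) writes/s"
    echo "WAN   (4.5 ms RTT):   $(echo '1000/4.5' | bc) writes/s"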

Output showing the high I/O:

    Device:  rrqm/s  wrqm/s  r/s   w/s    rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm   %util
    drbd0    0.00    0.00    0.00  0.00   0.00    0.00    0.00      0.00      0.00   0.00    0.00

    avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
              0.00   0.00   0.06     0.00     0.00    99.94

    Device:  rrqm/s  wrqm/s  r/s   w/s    rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm   %util
    drbd0    0.00    0.00    0.00  2.00   0.00    16.00   8.00      0.90      1.50   452.25  90.45

    avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
              0.25   0.00   0.13     0.50     0.00    99.12

    Device:  rrqm/s  wrqm/s  r/s   w/s    rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm   %util
    drbd0    0.00    0.00    1.00  44.00  8.00    352.00  8.00      1.07      2.90   18.48   83.15

    avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
              0.13   0.00   0.06     0.25     0.00    99.56

    Device:  rrqm/s  wrqm/s  r/s   w/s    rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm   %util
    drbd0    0.00    0.00    0.00  31.00  0.00    248.00  8.00      1.01      2.42   27.00   83.70

    avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
              0.19   0.00   0.06     0.00     0.00    99.75

    Device:  rrqm/s  wrqm/s  r/s   w/s    rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm   %util
    drbd0    0.00    0.00    0.00  2.00   0.00    16.00   8.00      0.32      1.50   162.25  32.45
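
(Also worth capturing, though the post doesn't: the backing disk next to drbd0. If sda stays near-idle while drbd0 sits at 90% util, the time is going to replication, not the local disk.)

    iostat -dx 1 drbd0 sda    # compare the DRBD device with its backing disk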

Edited the configuration as below, but still no luck:

    disk {
      on-io-error detach;
      no-disk-barrier;
      no-disk-flushes;
      no-md-flushes;
      c-plan-ahead 0;
      c-fill-target 24M;
      c-min-rate 80M;
      c-max-rate 300M;
      al-extents 3833;
    }
    net {
      cram-hmac-alg "sha1";
      shared-secret "clusterdb";
      after-sb-0pri disconnect;
      after-sb-1pri disconnect;
      after-sb-2pri disconnect;
      rr-conflict disconnect;
      max-epoch-size 20000;
      max-buffers 20000;
      unplug-watermark 16;
    }
    syncer {
      rate 100M;
      on-no-data-accessible io-error;
    }
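
(One step not shown in the post: edits like these only take effect once applied to the running resources, e.g. with drbdadm adjust in 8.4.)

    drbdadm adjust clusterdb                     # apply to the lower resource
    drbdadm --stacked adjust clusterdb_stacked   # apply to the stacked resource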

I don't see the stacked resource in your configuration. You also didn't mention any version numbers, but seeing al-extents set so low makes me think you're either running something ancient (8.3.x) or following some very old instructions.

Anyway, assuming you're using replication protocol A (asynchronous) for the stacked device, you're still going to fill your TCP send buffer quickly when I/O spikes; while the buffer drains, DRBD needs somewhere to put its replicated writes, and it can only have so many unacknowledged replicated writes in flight.
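
(A sketch of where to look, not part of the original answer: the kernel's socket-buffer ceilings bound how much DRBD can queue, and DRBD itself has a sndbuf-size net option; the 1M below is purely illustrative.)

    # Kernel-wide send-buffer limits for the socket DRBD replicates over:
    sysctl net.ipv4.tcp_wmem net.core.wmem_max

    # DRBD's own knob lives in the resource's net section (0 = auto-tune):
    #   net { sndbuf-size 1M; ... }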

I/O wait contributes to system load. If you temporarily disconnect the stacked resource, does the system load recover? That would be one way to verify that this is the problem. You could also look at your TCP buffers with something like netstat or ss to see how full they are while your load is high.
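
Concretely, that test might look like this (my rendering; the resource name and port are taken from the configs above):

    # Temporarily stop inter-site replication and see whether load drops:
    drbdadm --stacked disconnect clusterdb_stacked
    # ...observe iostat/load for a while, then reconnect:
    drbdadm --stacked connect clusterdb_stacked

    # Watch Send-Q on the replication connection while load is high; a
    # persistently large Send-Q means writes are queueing for the WAN:
    ss -tin '( sport = :7788 or dport = :7788 )'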

Unless the latency and throughput of the connection between your sites are amazing (dark fiber, or something), you probably need/want to use DRBD Proxy from LINBIT; it lets you use system memory to buffer writes.
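
(Only as a rough sketch, modeled on the DRBD 8.4 user guide's proxy examples rather than anything in this thread; the memlimit, ports, and placement are placeholders.)

    resource clusterdb_stacked {
      protocol A;
      proxy {
        memlimit 512M;    # RAM the proxy may use to buffer replicated writes
      }
      # each site then points replication at its local proxy:
      #   proxy on <proxy-host> {
      #     inside  127.0.0.1:7789;    # DRBD <-> proxy, local
      #     outside <site-ip>:7788;    # proxy <-> proxy, across the WAN
      #   }
    }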