Diagnosing packet loss / high latency on Ubuntu

We have a Linux box (Ubuntu 12.04) running Nginx (1.5.2), which acts as a reverse proxy / load balancer in front of Tornado and Apache hosts. The upstream servers are physically and logically close (same DC, sometimes the same rack) and show sub-millisecond latency between them:

    PING appserver (10.xx.xx.112) 56(84) bytes of data.
    64 bytes from appserver (10.xx.xx.112): icmp_req=1 ttl=64 time=0.180 ms
    64 bytes from appserver (10.xx.xx.112): icmp_req=2 ttl=64 time=0.165 ms
    64 bytes from appserver (10.xx.xx.112): icmp_req=3 ttl=64 time=0.153 ms

We sustain a load of roughly 500 requests per second, and we are currently seeing regular packet loss / latency spikes from the Internet, even on a basic ping:

    sam@AM-KEEN ~> ping -c 1000 loadbalancer
    PING 50.xx.xx.16 (50.xx.xx.16): 56 data bytes
    64 bytes from loadbalancer: icmp_seq=0 ttl=56 time=11.624 ms
    64 bytes from loadbalancer: icmp_seq=1 ttl=56 time=10.494 ms
    ... many packets later ...
    Request timeout for icmp_seq 2
    64 bytes from loadbalancer: icmp_seq=2 ttl=56 time=1536.516 ms
    64 bytes from loadbalancer: icmp_seq=3 ttl=56 time=536.907 ms
    64 bytes from loadbalancer: icmp_seq=4 ttl=56 time=9.389 ms
    ... many packets later ...
    Request timeout for icmp_seq 919
    64 bytes from loadbalancer: icmp_seq=918 ttl=56 time=2932.571 ms
    64 bytes from loadbalancer: icmp_seq=919 ttl=56 time=1932.174 ms
    64 bytes from loadbalancer: icmp_seq=920 ttl=56 time=932.018 ms
    64 bytes from loadbalancer: icmp_seq=921 ttl=56 time=6.157 ms
    --- 50.xx.xx.16 ping statistics ---
    1000 packets transmitted, 997 packets received, 0.3% packet loss
    round-trip min/avg/max/stddev = 5.119/52.712/2932.571/224.629 ms

The pattern is always the same: things run fine for a while (<20 ms), then one ping drops entirely, then three or four high-latency pings (>1000 ms), then it settles down again.

Traffic comes in over a bonded public interface (call it bond0), configured as follows:

    bond0     Link encap:Ethernet  HWaddr 00:xx:xx:xx:xx:5d
              inet addr:50.xx.xx.16  Bcast:50.xx.xx.31  Mask:255.255.255.224
              inet6 addr: <ipv6 address> Scope:Global
              inet6 addr: <ipv6 address> Scope:Link
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
              RX packets:527181270 errors:1 dropped:4 overruns:0 frame:1
              TX packets:413335045 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:240016223540 (240.0 GB)  TX bytes:104301759647 (104.3 GB)

Requests are then submitted over HTTP to the upstream servers on a private network (call it bond1), configured as follows:

    bond1     Link encap:Ethernet  HWaddr 00:xx:xx:xx:xx:5c
              inet addr:10.xx.xx.70  Bcast:10.xx.xx.127  Mask:255.255.255.192
              inet6 addr: <ipv6 address> Scope:Link
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
              RX packets:430293342 errors:1 dropped:2 overruns:0 frame:1
              TX packets:466983986 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:77714410892 (77.7 GB)  TX bytes:227349392334 (227.3 GB)

Output of uname -a:

Linux <hostname> 3.5.0-42-generic #65~precise1-Ubuntu SMP Wed Oct 2 20:57:18 UTC 2013 x86_64 GNU/Linux

We have customized sysctl.conf in an attempt to fix this problem, without success. Contents of /etc/sysctl.conf (irrelevant settings omitted):

    # net: core
    net.core.netdev_max_backlog = 10000

    # net: ipv4 stack
    net.ipv4.tcp_ecn = 2
    net.ipv4.tcp_sack = 1
    net.ipv4.tcp_fack = 1
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_tw_recycle = 0
    net.ipv4.tcp_timestamps = 1
    net.ipv4.tcp_window_scaling = 1
    net.ipv4.tcp_no_metrics_save = 1
    net.ipv4.tcp_max_syn_backlog = 10000
    net.ipv4.tcp_congestion_control = cubic
    net.ipv4.ip_local_port_range = 8000 65535
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_synack_retries = 2
    net.ipv4.tcp_thin_dupack = 1
    net.ipv4.tcp_thin_linear_timeouts = 1

    net.netfilter.nf_conntrack_max = 99999999
    net.netfilter.nf_conntrack_tcp_timeout_established = 300
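As a side note, edits to /etc/sysctl.conf only take effect after a reload, so it is worth confirming the values are actually live. A minimal sketch (key names taken from the file above):

```shell
# Reload /etc/sysctl.conf so the settings above take effect.
sudo sysctl -p

# Read back individual keys at runtime to confirm they applied.
sysctl net.ipv4.tcp_congestion_control
sysctl net.netfilter.nf_conntrack_max
```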

Output of dmesg -d, with non-ICMP UFW messages suppressed:

    [508315.349295 <   19.852453>] [UFW BLOCK] IN=bond1 OUT= MAC=<mac addresses> SRC=118.xx.xx.143 DST=50.xx.xx.16 LEN=68 TOS=0x00 PREC=0x00 TTL=51 ID=43221 PROTO=ICMP TYPE=3 CODE=1 [SRC=50.xx.xx.16 DST=118.xx.xx.143 LEN=40 TOS=0x00 PREC=0x00 TTL=249 ID=10220 DF PROTO=TCP SPT=80 DPT=53817 WINDOW=8190 RES=0x00 ACK FIN URGP=0 ]
    [517787.732242 <    0.443127>] Peer 190.xx.xx.131:59705/80 unexpectedly shrunk window 1155488866:1155489425 (repaired)

How can I diagnose the cause of this problem on a Debian-family Linux box?

Packet loss can occur if any part of the network path is saturated, or if any link in the path has errors. This won't show up in the interface error counters unless the problem happens to be in the cabling between the server and the switch it is plugged into. If the problem is anywhere else in the network, it simply shows up as lost packets.
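To rule out the local cable/NIC case first, one quick check (a sketch; bond0 comes from the question, and the slave name eth0 is an assumption, so look in /proc/net/bonding/bond0 for the real slave names) is to compare the kernel's per-interface counters against the driver's hardware counters:

```shell
# Kernel-level error/drop counters for the bonded interface
ip -s link show bond0

# Driver/hardware counters on a physical slave NIC; CRC or fifo errors
# here point at the cable or the switch port, not the wider network.
ethtool -S eth0 | egrep -i 'err|drop|crc|fifo|miss'
```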

If you have TCP traffic, you can surface this problem, because the kernel keeps counters that track the recovery steps TCP takes when packets are lost in a stream. Look at the -s (stats) option of netstat. The values presented are counters, so you need to watch them over time to get a feel for what is normal and what is anomalous, but the data is there. The retransmit and data loss counters are particularly useful.

    [sadadmin@busted ~]$ netstat -s | egrep -i 'loss|retran'
        2058 segments retransmited
        526 times recovered from packet loss due to SACK data
        193 TCP data loss events
        TCPLostRetransmit: 7
        2 timeouts after reno fast retransmit
        1 timeouts in loss state
        731 fast retransmits
        18 forward retransmits
        97 retransmits in slow start
        4 sack retransmits failed
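Since these are cumulative counters, the useful number is the delta over an interval, not the absolute value. A minimal sketch of sampling it by hand (assuming the net-tools netstat shown above; "retransmited" is the kernel/netstat output's own spelling):

```shell
# Sample the TCP retransmit counter twice, 10 seconds apart,
# and print the delta. Run this while a latency spike is happening:
# a jump in the delta during the spike points at real packet loss.
before=$(netstat -s | awk '/segments retransmited/ {print $1}')
sleep 10
after=$(netstat -s | awk '/segments retransmited/ {print $1}')
echo "segments retransmitted in the last 10s: $((after - before))"
```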

Some tools will sample these values and trend them for you, so you can see at a glance when something went wrong. I use munin.