Intel NIC on a Linux server – ports are bonded, but performance is limited

I took over a Debian 7 server with an Intel NIC whose ports are bonded together for load balancing. This is the hardware:

    lspci -vvv | egrep -i 'network|ethernet'
    04:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    04:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    07:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    07:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)

What confused me at first is that four entries show up, and the system presents eth0–eth3 (four ports), even though the NIC only has two ports according to its specification. However, only eth2 and eth3 actually work, so there are two usable ports:

    ip link show

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN mode DEFAULT
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN mode DEFAULT qlen 1000
        link/ether 00:25:90:19:5c:e4 brd ff:ff:ff:ff:ff:ff
    3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN mode DEFAULT qlen 1000
        link/ether 00:25:90:19:5c:e7 brd ff:ff:ff:ff:ff:ff
    4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
        link/ether 00:25:90:19:5c:e6 brd ff:ff:ff:ff:ff:ff
    5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
        link/ether 00:25:90:19:5c:e5 brd ff:ff:ff:ff:ff:ff
    6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
        link/ether 00:25:90:19:5c:e6 brd ff:ff:ff:ff:ff:ff

The problem is that my speed is lower than expected. When running two instances of iperf (one per port), I only get 471 Mbit/s per port, 942 Mbit/s combined. I expected more, since each port can do 1 Gbps! Why is that – is the bond not configured for maximum performance?

    [ 3] local xx.xxx.xxx.xxx port 60868 connected with xx.xxx.xxx.xxx port 5001
    [ ID] Interval       Transfer     Bandwidth
    [ 3]  0.0-180.0 sec  9.87 GBytes   471 Mbits/sec
    [ 3] local xx.xxx.xxx.xxx port 49363 connected with xx.xxx.xxx.xxx port 5002
    [ ID] Interval       Transfer     Bandwidth
    [ 3]  0.0-180.0 sec  9.87 GBytes   471 Mbits/sec
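For reference, the two streams above can be driven in parallel with something like the following sketch (the server address is the placeholder from the post, and the commands are only printed here so the snippet is self-contained; on the real hosts they would be launched):

```shell
# Sketch: one iperf client per server port, so both bonded links are
# loaded at the same time. SERVER is a placeholder for the remote
# iperf server's address.
SERVER="xx.xxx.xxx.xxx"

CMD1="iperf -c $SERVER -p 5001 -t 180"
CMD2="iperf -c $SERVER -p 5002 -t 180"

# On the real machine, run both in parallel and wait for them:
#   $CMD1 & $CMD2 & wait
# Here the commands are only printed, to keep the sketch runnable anywhere.
echo "$CMD1"
echo "$CMD2"
```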

The bonding configuration in /etc/network/interfaces:

    auto bond0
    iface bond0 inet static
        address xx.xxx.xxx.x
        netmask 255.255.255.0
        network xx.xxx.xxx.x
        broadcast xx.xxx.xxx.xxx
        gateway xx.xxx.xxx.x
        up /sbin/ifenslave bond0 eth0 eth1 eth2 eth3
        down /sbin/ifenslave -d bond0 eth0 eth1 eth2 eth3

The configured bonding mode is:

    cat /proc/net/bonding/bond0

  Bonding Mode: transmit load balancing 
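/proc/net/bonding/bond0 also reports the per-slave state, which is worth checking alongside the mode. A quick way to pull out just those lines (sketched here against an embedded sample of the file's contents, so the snippet is self-contained; on the server you would grep the real file):

```shell
# Sample of /proc/net/bonding/bond0 embedded for illustration only;
# on the server, grep the real file instead.
bond_status='Bonding Mode: transmit load balancing
Slave Interface: eth2
MII Status: up
Slave Interface: eth3
MII Status: up'

# Show only the mode and each slave's link state:
printf '%s\n' "$bond_status" | grep -E 'Bonding Mode|Slave Interface|MII Status'
```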

Output of ifconfig:

    bond0     Link encap:Ethernet  HWaddr 00:25:90:19:5c:e6
              inet addr:xx.xxx.xxx.9  Bcast:xx.xxx.xxx.255  Mask:255.255.255.0
              inet6 addr: fe80::225:90ff:fe19:5ce6/64 Scope:Link
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
              RX packets:19136117104 errors:30 dropped:232491338 overruns:0 frame:15
              TX packets:19689527247 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:20530968684525 (18.6 TiB)  TX bytes:17678982525347 (16.0 TiB)

    eth0      Link encap:Ethernet  HWaddr 00:25:90:19:5c:e4
              UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
              RX packets:235903464 errors:0 dropped:0 overruns:0 frame:0
              TX packets:153535554 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:202899148983 (188.9 GiB)  TX bytes:173442571769 (161.5 GiB)
              Memory:fafe0000-fb000000

    eth1      Link encap:Ethernet  HWaddr 00:25:90:19:5c:e7
              UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
              RX packets:3295412 errors:0 dropped:3276992 overruns:0 frame:0
              TX packets:152777329 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:213880307 (203.9 MiB)  TX bytes:172760941087 (160.8 GiB)
              Memory:faf60000-faf80000

    eth2      Link encap:Ethernet  HWaddr 00:25:90:19:5c:e6
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
              RX packets:18667703388 errors:30 dropped:37 overruns:0 frame:15
              TX packets:9704053069 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:20314102256898 (18.4 TiB)  TX bytes:8672061985928 (7.8 TiB)
              Memory:faee0000-faf00000

    eth3      Link encap:Ethernet  HWaddr 00:25:90:19:5c:e5
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
              RX packets:229214840 errors:0 dropped:229214309 overruns:0 frame:0
              TX packets:9679161295 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:13753398337 (12.8 GiB)  TX bytes:8660717026563 (7.8 TiB)
              Memory:fae60000-fae80000

Edit: I found the answer, thanks to the points raised below. The system was running in bonding mode 5 (TLB); to get double the speed, it has to run in bonding mode 4 (IEEE 802.3ad dynamic link aggregation). Thanks!
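For completeness, one way mode 4 could be selected on a Debian 7 setup that enslaves interfaces via ifenslave is through the bonding module options. This is only a sketch: the miimon and lacp_rate values are assumptions, not from the post, and the switch ports must also be configured as an LACP aggregation group for 802.3ad to negotiate.

```
# /etc/modprobe.d/bonding.conf
# mode=4 is IEEE 802.3ad dynamic link aggregation; it requires
# LACP support and matching configuration on the switch side.
options bonding mode=4 miimon=100 lacp_rate=1
```

After changing the options, the bonding module has to be reloaded (or the machine rebooted) for the new mode to take effect.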

If you only know about two ports, and only two ports work, then you should:

  1. Find out what is going on with the other two ports you currently can't account for. Perhaps they are integrated on the server motherboard, and they may even be connected to something you don't expect.
  2. Include in your software networking configuration only the ports that are physically connected to the same switch.

Once that is taken care of, you will be in a much better position to gather the information needed to identify the performance problem.