NIC bonding / balance-rr with a Dell PowerConnect 5324

I am trying to get NIC bonding with balance-rr working so that three NIC ports are combined and we get 3 Gbps instead of 1 Gbps. We are doing this on two servers connected to the same switch. However, we only get the speed of a single physical link.

We are using one Dell PowerConnect 5324, SW version 2.0.1.3, boot version 1.0.2.02, HW version 00.00.02. Both servers run CentOS 5.9 (Final) with the OnApp Hypervisor (CloudBoot).

Server 1 uses ports g5-g7 in port-channel 1. Server 2 uses ports g9-g11 in port-channel 2.

Switch:

show interface status

                                                  Flow Link          Back     Mdix
    Port     Type         Duplex  Speed Neg      ctrl State       Pressure Mode
    -------- ------------ ------  ----- -------- ---- ----------- -------- -------
    g1       1G-Copper      --     --     --      --  Down           --      --
    g2       1G-Copper     Full    1000  Enabled  Off Up          Disabled  Off
    g3       1G-Copper      --     --     --      --  Down           --      --
    g4       1G-Copper      --     --     --      --  Down           --      --
    g5       1G-Copper     Full    1000  Enabled  Off Up          Disabled  Off
    g6       1G-Copper     Full    1000  Enabled  Off Up          Disabled  Off
    g7       1G-Copper     Full    1000  Enabled  Off Up          Disabled  On
    g8       1G-Copper     Full    1000  Enabled  Off Up          Disabled  Off
    g9       1G-Copper     Full    1000  Enabled  Off Up          Disabled  On
    g10      1G-Copper     Full    1000  Enabled  Off Up          Disabled  On
    g11      1G-Copper     Full    1000  Enabled  Off Up          Disabled  Off
    g12      1G-Copper     Full    1000  Enabled  Off Up          Disabled  On
    g13      1G-Copper      --     --     --      --  Down           --      --
    g14      1G-Copper      --     --     --      --  Down           --      --
    g15      1G-Copper      --     --     --      --  Down           --      --
    g16      1G-Copper      --     --     --      --  Down           --      --
    g17      1G-Copper      --     --     --      --  Down           --      --
    g18      1G-Copper      --     --     --      --  Down           --      --
    g19      1G-Copper      --     --     --      --  Down           --      --
    g20      1G-Copper      --     --     --      --  Down           --      --
    g21      1G-Combo-C     --     --     --      --  Down           --      --
    g22      1G-Combo-C     --     --     --      --  Down           --      --
    g23      1G-Combo-C     --     --     --      --  Down           --      --
    g24      1G-Combo-C    Full    100   Enabled  Off Up          Disabled  On

                                             Flow     Link
    Ch       Type    Duplex  Speed  Neg      control  State
    -------- ------- ------  -----  -------- -------  -----------
    ch1      1G      Full    1000   Enabled  Off      Up
    ch2      1G      Full    1000   Enabled  Off      Up
    ch3      --      --      --     --       --       Not Present
    ch4      --      --      --     --       --       Not Present
    ch5      --      --      --     --       --       Not Present
    ch6      --      --      --     --       --       Not Present
    ch7      --      --      --     --       --       Not Present
    ch8      --      --      --     --       --       Not Present
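(For reference, the port-channels above would typically be created along these lines on the 53xx CLI; this is only a sketch of the usual syntax, where `mode on` means a static LAG without LACP. The exact commands on your firmware may differ:)

    console# configure
    console(config)# interface range ethernet g(5-7)
    console(config-if)# channel-group 1 mode on
    console(config-if)# exit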

Server 1:

cat /etc/sysconfig/network-scripts/ifcfg-eth3

    DEVICE=eth3
    HWADDR=00:1b:21:ac:d5:55
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=onappstorebond
    SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-eth4

    DEVICE=eth4
    HWADDR=68:05:ca:18:28:ae
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=onappstorebond
    SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-eth5

    DEVICE=eth5
    HWADDR=68:05:ca:18:28:af
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=onappstorebond
    SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-onappstorebond

    DEVICE=onappstorebond
    IPADDR=10.200.52.1
    NETMASK=255.255.0.0
    GATEWAY=10.200.2.254
    NETWORK=10.200.0.0
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
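Note that none of these ifcfg files selects the bonding mode. On CentOS 5 that usually lives in /etc/modprobe.conf (or in a BONDING_OPTS line), so presumably something along these lines is in place for balance-rr to be active; this is an assumption, since it is not shown in the question:

    # /etc/modprobe.conf (assumed, not shown above)
    alias onappstorebond bonding
    options onappstorebond mode=balance-rr miimon=100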

cat /proc/net/bonding/onappstorebond

    Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)

    Bonding Mode: load balancing (round-robin)
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    Slave Interface: eth3
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:1b:21:ac:d5:55

    Slave Interface: eth4
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 68:05:ca:18:28:ae

    Slave Interface: eth5
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 68:05:ca:18:28:af

Server 2:

cat /etc/sysconfig/network-scripts/ifcfg-eth3

    DEVICE=eth3
    HWADDR=00:1b:21:ac:d5:a7
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=onappstorebond
    SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-eth4

    DEVICE=eth4
    HWADDR=68:05:ca:18:30:30
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=onappstorebond
    SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-eth5

    DEVICE=eth5
    HWADDR=68:05:ca:18:30:31
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=onappstorebond
    SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-onappstorebond

    DEVICE=onappstorebond
    IPADDR=10.200.53.1
    NETMASK=255.255.0.0
    GATEWAY=10.200.3.254
    NETWORK=10.200.0.0
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes

cat /proc/net/bonding/onappstorebond

    Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)

    Bonding Mode: load balancing (round-robin)
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    Slave Interface: eth3
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:1b:21:ac:d5:a7

    Slave Interface: eth4
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 68:05:ca:18:30:30

    Slave Interface: eth5
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 68:05:ca:18:30:31

Here are the iperf results:

    ------------------------------------------------------------
    Client connecting to 10.200.52.1, TCP port 5001
    TCP window size: 27.7 KByte (default)
    ------------------------------------------------------------
    [  3] local 10.200.3.254 port 53766 connected with 10.200.52.1 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.0 sec   950 MBytes   794 Mbits/sec

Inbound load balancing from the switch to your systems is controlled by the switch; your bonding mode only affects how the servers transmit.

You may well be transmitting 3 Gbps of out-of-order TCP, but you only receive 1 Gbps, because the switch's port-channel sends any given flow down a single slave.

You are not even getting the full 1 Gbps because balance-rr frequently delivers TCP segments out of order, so TCP has to work overtime reordering (and retransmitting parts of) your iperf stream.
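(You can watch this cost in the kernel's TCP counters on either host during the test; these are standard Linux counters, nothing specific to this setup:)

    # look for retransmit and reordering counters climbing during the test
    netstat -s | egrep -i 'retrans|reorder'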

In my experience, it is practically impossible to load-balance a single TCP stream reliably.

Properly configured bonding can give you the aggregate bandwidth of the slaves under the right conditions, but the maximum throughput of any single stream is the top speed of one slave.
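A quick way to test aggregate (rather than single-stream) behaviour is to run several parallel iperf streams; note that whether they actually spread across slaves depends on the switch's hash, and a purely MAC- or IP-based hash will still pin all host-to-host traffic onto one link:

    # on the receiving server
    iperf -s

    # on the client: three parallel TCP streams instead of one
    iperf -c 10.200.52.1 -P 3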

Personally, I would use mode 2 (balance-xor, with an EtherChannel on the switch) or mode 4 (802.3ad, with LACP on the switch).
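As a rough sketch of the mode 4 variant (this assumes your CentOS 5 initscripts support BONDING_OPTS, which is the case from 5.3 onwards; addresses taken from server 1 above):

    # /etc/sysconfig/network-scripts/ifcfg-onappstorebond
    DEVICE=onappstorebond
    IPADDR=10.200.52.1
    NETMASK=255.255.0.0
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    BONDING_OPTS="mode=802.3ad miimon=100"

On the switch side, `mode auto` (rather than `mode on`) is what enables LACP on a 53xx channel-group:

    console(config)# interface range ethernet g(5-7)
    console(config-if)# channel-group 1 mode auto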

If you need more than 1 Gbps on a single stream, you need faster NICs.