Unable to use private and public IP networking simultaneously in an OpenVZ container on Proxmox

I'm running a fresh install of Proxmox 3. For those familiar with it, I'm using OVH vRack 1.5 (and previously vRack 1.0).

My server has two interfaces, eth0 and eth1. I successfully configured both the private and the public IP on the host node, and I can ping every server on the VLAN.

I then created an OpenVZ container and assigned it a public and a private IP in the Proxmox GUI (plain venet).

Let's say I'm using 172.16.0.129 for the internal network.

Once logged into the container, I can successfully ping everything on my private network, but I cannot reach any public IP.

Here is the host node configuration:

ifconfig

 dummy0    Link encap:Ethernet  HWaddr 8a:ee:41:c1:ec:53
           inet6 addr: fe80::84ed:41ff:fec1:ec53/64 Scope:Link
           UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:0 (0.0 B)  TX bytes:1950 (1.9 KiB)

 eth0      Link encap:Ethernet  HWaddr 00:32:90:a7:43:48
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:111570 errors:0 dropped:0 overruns:0 frame:0
           TX packets:58220 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:140197486 (133.7 MiB)  TX bytes:8647245 (8.2 MiB)

 eth1      Link encap:Ethernet  HWaddr 00:25:90:54:43:49
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:421 errors:0 dropped:0 overruns:0 frame:0
           TX packets:93 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:43258 (42.2 KiB)  TX bytes:6322 (6.1 KiB)

 lo        Link encap:Local Loopback
           inet addr:127.0.0.1  Mask:255.0.0.0
           inet6 addr: ::1/128 Scope:Host
           UP LOOPBACK RUNNING  MTU:16436  Metric:1
           RX packets:3879 errors:0 dropped:0 overruns:0 frame:0
           TX packets:3879 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:2507778 (2.3 MiB)  TX bytes:2507778 (2.3 MiB)

 venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
           inet6 addr: fe80::1/128 Scope:Link
           UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
           RX packets:49 errors:0 dropped:0 overruns:0 frame:0
           TX packets:28 errors:0 dropped:3 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:3535 (3.4 KiB)  TX bytes:2236 (2.1 KiB)

 vmbr0     Link encap:Ethernet  HWaddr 00:25:90:a7:43:48
           inet addr:5.135.14.28  Bcast:5.135.14.255  Mask:255.255.255.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:103047 errors:0 dropped:0 overruns:0 frame:0
           TX packets:54482 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:137374926 (131.0 MiB)  TX bytes:6823790 (6.5 MiB)

 vmbr1     Link encap:Ethernet  HWaddr 86:ed:41:c1:ec:53
           inet6 addr: fe80::84ed:41ff:fec1:ec53/64 Scope:Link
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:0 (0.0 B)  TX bytes:578 (578.0 B)

 vmbr2     Link encap:Ethernet  HWaddr 00:25:90:a7:43:49
           inet addr:172.16.0.128  Bcast:172.31.255.255  Mask:255.240.0.0
           inet6 addr: fe80::225:90ff:fea7:4349/64 Scope:Link
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:349 errors:0 dropped:0 overruns:0 frame:0
           TX packets:69 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:30789 (30.0 KiB)  TX bytes:4794 (4.6 KiB)

interfaces

 auto lo
 iface lo inet loopback

 # for Routing
 auto vmbr1
 iface vmbr1 inet manual
         post-up /etc/pve/kvm-networking.sh
         bridge_ports dummy0
         bridge_stp off
         bridge_fd 0

 # vmbr0: Bridging. Make sure to use only MAC addresses that were assigned to you.
 auto vmbr0
 iface vmbr0 inet static
         address 5.135.14.28
         netmask 255.255.255.0
         network 5.135.14.0
         broadcast 5.135.14.255
         gateway 5.135.14.254
         bridge_ports eth0
         bridge_stp off
         bridge_fd 0

 # bridge vrack 1.5
 auto vmbr2
 iface vmbr2 inet static
         address 172.16.0.128
         netmask 255.240.0.0
         broadcast 172.31.255.255
         gateway 172.31.255.254
         bridge_ports eth1
         bridge_stp off
         bridge_fd 0

And the routing table:

 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 172.16.0.129    0.0.0.0         255.255.255.255 UH    0      0        0 venet0
 4.1.5.13        0.0.0.0         255.255.255.255 UH    0      0        0 venet0
 5.135.14.0      0.0.0.0         255.255.255.0   U     0      0        0 vmbr0
 172.16.0.0      0.0.0.0         255.240.0.0     U     0      0        0 vmbr2
 0.0.0.0         5.135.14.254    0.0.0.0         UG    0      0        0 vmbr0

The container's routing table is as follows:

 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 0.0.0.0         0.0.0.0         0.0.0.0         U     0      0        0 venet0

ifconfig

 venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
           inet addr:127.0.0.2  P-t-P:127.0.0.2  Bcast:0.0.0.0  Mask:255.255.255.255
           UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
           RX packets:3 errors:0 dropped:0 overruns:0 frame:0
           TX packets:21 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:252 (252.0 B)  TX bytes:1594 (1.5 KB)

 venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
           inet addr:172.16.0.129  P-t-P:172.16.0.129  Bcast:0.0.0.0  Mask:255.255.255.255
           UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

 venet0:1  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
           inet addr:4.1.5.173  P-t-P:4.1.5.173  Bcast:0.0.0.0  Mask:255.255.255.255
           UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

To recap:

  • From the host, I can reach both the public and the private network.
  • From the container, I can reach either the public or the private network, depending on the order in which I assigned the IP addresses.

I compared this with some existing Proxmox configurations that work fine, but I couldn't find any difference.

Any help would be greatly appreciated. Thanks.

Summarizing the relevant thread on the Proxmox forum – http://forum.proxmox.com/threads/5008-Network-issue-setting-up-two-networks-(OpenVZ-container)

You need to use veth (bridged) networking instead of the default venet (routed) networking.

Create two bridged interfaces through the Proxmox GUI (one per bridge/network); you can then configure two network interfaces inside the container, one per network, just as you would on any other type of server.
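As a minimal sketch of that setup from the command line (assuming container ID 101 and the vmbr0/vmbr2 bridges from the host configuration above — adjust the ID, names, and addresses to your environment), the veth attachment and the resulting container-side config could look like this:

```shell
# Attach one veth interface per bridge to the container.
# vzctl's --netif_add takes ifname[,mac,host_ifname,host_mac,bridge];
# the empty fields let OpenVZ auto-generate MACs and host-side names.
vzctl set 101 --netif_add eth0,,,,vmbr0 --save   # public network
vzctl set 101 --netif_add eth1,,,,vmbr2 --save   # private vRack network

# Inside the container, /etc/network/interfaces is then a normal
# dual-homed setup (the addresses below are illustrative placeholders):
#
#   auto eth0
#   iface eth0 inet static
#           address <public-IP>
#           netmask 255.255.255.0
#           gateway 5.135.14.254
#
#   auto eth1
#   iface eth1 inet static
#           address 172.16.0.129
#           netmask 255.240.0.0
```

Note that with OVH failover IPs the public-side gateway configuration can differ from a plain static setup (OVH commonly documents a /32 address with an explicit pointopoint route to the host's gateway), so check the vRack documentation for your offer before copying these values.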

For more information on the differences between venet and veth, see the OpenVZ wiki – http://openvz.org/Differences_between_venet_and_veth
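To confirm the container actually switched from venet to veth after a restart, a quick check from the host might look like this (container ID 101 is again an assumption):

```shell
# A venet container shows a single point-to-point 'venet0' device with
# no MAC address (as in the container ifconfig above); after switching
# to veth you should instead see eth0/eth1 as ordinary Ethernet devices
# with their own MACs, bridged to vmbr0/vmbr2 on the host.
vzctl exec 101 ip -o link show
vzctl exec 101 ip route show
```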