6. network: bridge: on the host
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "df8e30242635b2...",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]
inter-container communication (icc) is enabled by default,
and the daemon will masquerade (NAT) all traffic
going out of the bridge
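Both options can be checked and changed from the host; a minimal sketch, assuming a Linux host with iptables-based NAT and the Docker CLI (the network name is hypothetical):

```shell
# The ip_masquerade option shows up as a NAT rule for the bridge subnet:
sudo iptables -t nat -S POSTROUTING | grep MASQUERADE
# (expect a rule like: -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE)

# The same com.docker.network.bridge.* options can be set on a
# user-defined bridge, e.g. to turn inter-container communication off:
docker network create \
  -o com.docker.network.bridge.enable_icc=false \
  no-icc-net
```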
7. network: bridge: as seen by the host
$ ifconfig docker0
docker0   Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:a2ff:fe10:ccf7/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:480 (480.0 B)  TX bytes:5025 (5.0 KB)
$
$ ip route
default via 192.168.1.1 dev wlan0 proto static metric 600
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
...
this is the route for traffic from the host to the containers;
docker0 sits at 172.17.0.1 by default
8. network: bridge: as seen by the container
$ docker run --rm --name container-02 -ti busybox /bin/sh
/ #
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:40 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6558 (6.4 KiB)  TX bytes:508 (508.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
/ # exit
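Since icc is on by default, any two containers on docker0 can reach each other directly. A minimal demo (container names are hypothetical; needs a running Docker daemon; note the default bridge has no embedded DNS, so we look up the peer's address first):

```shell
# Two throwaway containers on the default bridge:
docker run -d --rm --name icc-a busybox sleep 300
docker run -d --rm --name icc-b busybox sleep 300

# No name resolution on the default bridge -> resolve icc-a's IP by hand:
IP_A=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' icc-a)

# With enable_icc=true (the default) this ping succeeds:
docker exec icc-b ping -c 1 "$IP_A"

docker stop icc-a icc-b
```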
13. network: multi-host
Solution 1: create a common IP space (at container level), with
routing and probably NAT
● assign a /24 subnet to each host
● set up IP routes between the subnets
● make sure the gateway is through the host’s eth0.
drivers available: Calico, Flannel, Romana, ...
17. network: multi-host: overlay
Solution 2: create an overlay network
● create a parallel network for cross communication
● connect hosts with encapsulation tunnels
● connect containers to the virtual networks
(something like a VPN)
drivers available: Docker (natively), Flannel, Weave, ...
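With Docker's native driver the steps above reduce to a couple of commands; a sketch, assuming swarm mode on the hosts (the network name is hypothetical):

```shell
# On the first host: bootstrap swarm mode, which the native overlay driver needs.
docker swarm init

# Create an overlay network; --attachable lets plain `docker run`
# containers join it, not only swarm services.
docker network create -d overlay --attachable demo-overlay

# Containers attached to demo-overlay on any swarm host can reach each other.
docker run --rm --network demo-overlay busybox ip addr
```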
19. network: multi-host: overlay
drivers available:
● Docker (natively): VXLAN
● Flannel: VXLAN, proprietary (UDP)
● Weave: VXLAN, proprietary (UDP)
VXLAN is faster than UDP: traffic does not go up to user-space,
and some hardware has acceleration support for VXLAN...
but UDP can add encryption easily...
26. network: overlay vs routing
         Good                                Bad
Routing  ● Native performance                ● VPN between clouds
         ● Security (*)                      ● Can run out of IP
         ● Control over the                    addresses
           infrastructure
Overlay  ● Better inter-cloud                ● Lower performance
         ● Encrypted traffic (*)             ● IP multicast
(*) depends on the driver