4. OpenStack Networking before Neutron - Refresher
§ Nova has its own networking service – nova-network. It was used before Neutron
§ Nova-network is still present today, and can be used instead of Neutron
§ Nova-network does:
§ Base L2 network provisioning through Linux Bridge (brctl)
§ IP address management for tenants (in SQL DB)
§ DHCP and DNS entries in dnsmasq
§ Firewall policies and NAT in iptables (nova-compute)
§ Calls to network services are done through the nova API
(Diagram: Nova architecture – nova-api (OS, EC2, Admin), nova-scheduler, nova-compute, nova-network, nova-volume, nova-metadata, nova-cert, nova-console (vnc/vmrc), nova-consoleauth, queue and Nova DB; the hypervisor (KVM, Xen, etc.) is driven via libvirt, XenAPI, etc.; volume providers (iSCSI, LVM, etc.); network providers (Linux Bridge or OVS with brcompat, dnsmasq, iptables))
§ Nova-network only knows 3 basic network models (a minimal nova.conf sketch follows below):
§ Flat & Flat DHCP – direct bridging of instances to an external Ethernet interface,
with and without DHCP
§ VLAN based – every tenant gets a VLAN, DHCP enabled
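A minimal sketch of how the model is chosen in nova.conf. The manager class paths match the Folsom-era nova-network; treat the exact names as an assumption to verify against your release:

  # /etc/nova/nova.conf – pick exactly one network manager
  network_manager=nova.network.manager.FlatManager        # Flat
  # network_manager=nova.network.manager.FlatDHCPManager  # Flat DHCP
  # network_manager=nova.network.manager.VlanManager      # VLAN based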
5. Nova-Networking deployment modes - Flat
§ In flat mode all VMs are patched into the same bridge (normally the Linux Bridge)
§ All VM traffic is directly bridged onto the physical transport network (or single VLAN),
aka the ‘fixed network’
§ DHCP and the default gateway are provided externally, not by OpenStack components
§ All VMs in a project are bridged to the same network; there is no multi-tenancy
besides security groups (iptables between VM interfaces and the bridge)
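A minimal sketch of the flat-mode configuration and the bridging it results in; the bridge and interface names are illustrative assumptions:

  # /etc/nova/nova.conf – flat mode
  network_manager=nova.network.manager.FlatManager
  flat_network_bridge=br100   # the shared bridge (“Bridge 100” in the diagram)
  flat_interface=eth1         # physical NIC patched into that bridge

  # Equivalent to what nova-network does with brctl on each compute node:
  brctl addbr br100
  brctl addif br100 eth1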
(Diagram: three compute nodes, each running nova-compute and a hypervisor; on every node the VMs attach to Bridge 100, which is bridged via the node’s IP stack onto the shared transport network (or VLAN); an external DHCP server and the WAN/Internet gateway sit on that network, and a separate management network (or VLAN) connects the nodes)
6. Nova-Networking deployment modes – Flat / DHCP
§ As in flat mode, all VMs are patched into the same bridge and all VM traffic is directly
bridged onto the physical transport network (or single VLAN) – aka the ‘fixed network’
§ DHCP and the default gateway are provided by OpenStack Networking – through
‘dnsmasq’ (DHCP) and the iptables/routing stack + NAT / floating IPs
§ All VMs in a project are bridged to the same network; there is no multi-tenancy besides
security groups (iptables between VM interfaces and the bridge)
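A minimal Flat DHCP sketch, using Folsom-era nova-network flag names; the interface names and the multi-host choice are illustrative assumptions:

  # /etc/nova/nova.conf – Flat DHCP mode
  network_manager=nova.network.manager.FlatDHCPManager
  flat_network_bridge=br100
  flat_interface=eth1      # NIC on the internal (fixed) network
  public_interface=eth0    # NIC used for floating-IP NAT toward the WAN
  multi_host=True          # run dnsmasq/NAT on every compute node (see footnote)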
(Diagram: one compute + networking node* runs nova-network, dnsmasq and nova-compute, with iptables/routing providing NAT & floating IPs toward the WAN/Internet on the external network (or VLAN); two further compute nodes run just nova-compute and a hypervisor; on every node the VMs attach to Bridge 100, which connects to the internal network (or VLAN))
* With ‘multi-host’, each compute node will also be a networking node
7. Nova-Networking deployment modes – VLAN
§ Unlike the flat modes, each project has its own network that maps to a VLAN and
bridge, which must be pre-configured on the physical network
§ VM traffic is bridged through one bridge and VLAN per project onto the physical network
§ DHCP and the default gateway are provided by OpenStack Networking – through
‘dnsmasq’ (DHCP) and the iptables/routing stack + NAT / floating IPs
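A minimal VLAN-mode sketch; the trunk interface and starting VLAN ID are illustrative assumptions:

  # /etc/nova/nova.conf – VLAN mode
  network_manager=nova.network.manager.VlanManager
  vlan_interface=eth1   # trunk NIC carrying the per-project VLANs
  vlan_start=30         # first VLAN ID to allocate (Br 30/VLAN30, Br 40/VLAN40, ...)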
(Diagram: one compute + networking node* runs nova-network, one dnsmasq per project and nova-compute, with iptables/routing providing NAT & floating IPs toward the WAN/Internet on the external network (or VLAN); two further compute nodes run just nova-compute and a hypervisor; on every node each project’s VMs attach to their own bridge (Br 30, Br 40) with a matching VLAN sub-interface (VLAN30, VLAN40), carried over a VLAN trunk on the internal VLANs)
* With ‘multi-host’, each compute node will also be a networking node
10. Neutron – Open Source OVS Plugin Architecture
§ The following components play a role in the open source OVS Plugin Architecture;
§ Neutron-OVS-Agent: Receives tunnel & flow setup information from the OVS-Plugin and
programs OVS to build tunnels and to steer traffic into those tunnels (see the sketch below)
§ Neutron-DHCP-Agent: Sets up dnsmasq in a namespace per configured network/subnet,
and enters MAC/IP combinations in the dnsmasq DHCP lease file
§ Neutron-L3-Agent: Sets up iptables/routing/NAT tables (routers) as directed by the OVS Plugin
§ In most cases GRE overlay tunnels are used, but flat and VLAN modes are also possible
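Roughly what the OVS agent programs on each node, shown with ovs-vsctl; the bridge names follow the diagram, the peer IP is an illustrative assumption:

  ovs-vsctl add-br br-int   # integration bridge holding the VM ports
  ovs-vsctl add-br br-tun   # tunnel bridge
  ovs-vsctl add-port br-tun gre-1 -- \
    set interface gre-1 type=gre options:remote_ip=192.168.100.2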
(Diagram: a Neutron network node runs the L3 agent (iptables/routing, NAT & floating IPs toward the WAN/Internet via br-ex on the external network (or VLAN)), the DHCP agent (dnsmasq instances) and the OVS agent (ovsdb-server/ovs-vswitchd, br-int, br-tun); the Neutron server hosts the OVS plugin; two compute nodes each run nova-compute, the OVS agent and ovsdb-server/ovs-vswitchd, with VMs attached to br-int and br-tun carrying L2-in-L3 (GRE) tunnels across the Layer 3 transport network)
11. Open Source OVS Plugin / VMware NSX Plugin differences
§ With the VMware NSX Plugin (aka NVP Plugin) the following services are replaced by
VMware NSX components;
§ OVS-Plugin: The OVS Plugin is replaced by the NVP-Plugin
§ Neutron-OVS-Agent: Instead of the OVS-Agent, a centralized NVP controller cluster is used
§ Neutron-L3-Agent: Instead of the L3-Agent, a scale-out cluster of NVP Layer 3 Gateways is used
§ IPTables/Ebtables: Security is provided by native OpenVSwitch methods, controlled by the
NVP Controller Cluster
§ GRE Tunneling is replaced with the higher-performing STT technology
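The swap happens where the Neutron/Quantum server loads its core plugin. A sketch of the relevant setting; the exact Folsom-era class paths are an assumption to verify against your release:

  # quantum.conf – core plugin selection
  core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
  # with VMware NSX/NVP instead:
  # core_plugin = quantum.plugins.nicira.nicira_nvp_plugin.QuantumPlugin.NvpPluginV2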
(Diagram: the same topology as the open source setup – a network node with L3 agent (NAT & floating IPs toward the WAN/Internet via br-ex on the external network (or VLAN)), DHCP agent (dnsmasq) and OVS agent, plus two compute nodes with nova-compute, br-int/br-tun and ovsdb-server/ovs-vswitchd – but the Neutron server now loads the NVP plugin, with L2-in-L3 tunnels across the Layer 3 transport network)
12. OpenVSwitch with VMware NSX
(Diagram: Open vSwitch internals on a hypervisor – the NSX Controller Cluster reaches each node over the management network (eth0), speaking OpenFlow (TCP 6633) to ovs-vswitchd and OVSDB (TCP 6632) to ovsdb-server; ovsdb-server holds the config/state DB, while ovs-vswitchd programs the br-int flow table; br-0 carries the flow & tunnel ports to the Linux IP stack + routing table (192.168.10.1) on the transport network (eth1); WEB and APP VMs attach to br-int)
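A sketch of how an OVS node is pointed at an external controller over those two channels; the controller address is an illustrative assumption:

  ovs-vsctl set-manager ssl:192.168.10.50:6632             # OVSDB (config/state)
  ovs-vsctl set-controller br-int ssl:192.168.10.50:6633   # OpenFlow (flow table)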
13. Open Source OVS Plugin / VMware NSX Plugin differences
§ A centralized scale-out controller cluster controls all OpenVSwitches in all compute and
network nodes. It configures the tunnel interfaces and programs the flow tables of OVS
§ The NSX L3 Gateway Service (scale-out) takes over the L3 routing and NAT functions
§ The NSX Service-Node relieves the compute nodes of the task of replicating broadcast,
unknown unicast and multicast traffic sourced by VMs
§ Security-Groups are implemented natively in OVS, instead of iptables/ebtables
(the controller-pushed state can be inspected as sketched below)
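One way to inspect the controller-programmed state on any node, using standard OVS tooling; that it applies unchanged to NSX’s OVS build is an assumption:

  ovs-ofctl dump-flows br-int   # flow table pushed by the NSX controller cluster
  ovs-vsctl list interface      # tunnel ports (incl. STT tunnels) and their options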
(Diagram: the NSX Controller Cluster and the Neutron server with the NVP plugin sit on the management network; the Neutron network node now runs only the DHCP agent (dnsmasq) plus ovsdb-server/ovs-vswitchd; the compute nodes run nova-compute with VMs on br-int, and br-0 faces the Layer 3 transport network; L2-in-L3 (STT) tunnels interconnect the nodes, the NSX Service Node assists with replication, and the NSX L3GW + NAT connects to the WAN/Internet)
16. Management & Operations – Software Upgrades
§ Automated deployment of new versions
§ Built-in compatibility verification
§ Rollback
§ Online upgrade (i.e. dataplane & control plane services stay up)
17. Nova Metadata Service in Folsom
§ Nova-metadata is used to enable the use of cloud-init enabled images
(https://help.ubuntu.com/community/CloudInit)
§ After getting an IP address the instance contacts the well-known IP 169.254.169.254
via HTTP and requests the metadata it needs
§ Some of the things cloud-init configures are:
§ Setting a default locale, hostname, etc.
§ Setting up ephemeral mount points
§ Generating SSH private keys, and adding SSH keys to the user’s .ssh/authorized_keys
so they can log in
§ With Neutron in Folsom, the quantum-dhcp-agent will do the following (see the sketch
after this list):
§ Provide DHCP option 121 “classless static routes” – adds a static route to
169.254.169.254 pointing at the dhcp-agent host itself
§ iptables on the dhcp-agent host NATs the request either to the local metadata
service on the dhcp-agent host, or to a remote metadata service
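Roughly what that amounts to on the dhcp-agent host; the agent’s tenant-net IP and the metadata port are illustrative assumptions:

  # dnsmasq pushes a host route for the metadata IP via DHCP option 121:
  dnsmasq ... --dhcp-option=121,169.254.169.254/32,10.0.0.2

  # iptables then DNATs the instance’s metadata request:
  iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.0.2:8775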
(Diagram: the instance sends an HTTP request to 169.254.169.254 with next-hop = the quantum-dhcp-agent IP in the tenant net; the dhcp-agent host NATs it to the local nova-metadata service, or forwards it to a remote nova-metadata)
§ !! Caveat: In Folsom there is no support for overlapping IPs, and no support for
namespaces, if nova-metadata is used. In Grizzly this changes (see next slide)
18. Nova Metadata Service in Grizzly
§ To address the limitations of nova-metadata in Folsom, the Grizzly release introduces two
new services on the network node:
quantum-ns-metadata-proxy and quantum-metadata-proxy (http://tinyurl.com/a3n4ypl for details)
§ In Grizzly DHCP option 121 is no longer used. The L3GW routes requests to
169.254.169.254 to the ns-metadata-proxy
§ The ns-metadata-proxy parses the request and forwards it internally to the
metadata-proxy with two new headers, ‘X-Forwarded-For’ and ‘X-Quantum-Router-ID’.
These headers provide the context to properly identify the instance that made the
original request. Only the metadata-proxy can reach hosts on the management network
§ The metadata-proxy uses the two headers to retrieve the device-id of the port that sent
the request by interrogating the quantum-server
(Diagram: on the network node, the quantum-ns-metadata-proxy runs inside the tenant router’s network namespace and talks to the quantum-metadata-proxy via a UNIX domain socket; the metadata-proxy reaches the quantum-server and nova-metadata on the management network)
§ The metadata-proxy uses the device-id received from the quantum-server to construct the
‘X-Instance-ID’ header, and sends the request to nova-metadata including this information
§ Nova-metadata then uses the ‘X-Instance-ID’ header to identify the tenant and to properly
service the request (the full chain is sketched below)
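The resulting request chain, seen from an instance; the paths and header values are illustrative:

  # inside the instance:
  curl http://169.254.169.254/latest/meta-data/instance-id
  # ns-metadata-proxy (in the router namespace) adds, e.g.:
  #   X-Forwarded-For: 10.0.0.4
  #   X-Quantum-Router-ID: <router uuid>
  # metadata-proxy resolves these to the port’s device-id via quantum-server, adds:
  #   X-Instance-ID: <instance uuid>
  # and forwards the request to nova-metadata on the management network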