2. What Is OpenStack Neutron?
Neutron’s Mission:
To implement services and associated libraries
to provide on-demand, scalable, and
technology-agnostic network abstraction.
4. Neutron: A bit of Quantum History
● April 2011: Interested parties converge to create a
common networking API for OpenStack with the moniker
Quantum
● September 2012: Quantum becomes part of the Folsom release
● October 2013: Quantum renamed to Neutron
● May 2015: Neutron rankings for Kilo release
● #1 for reviews
● #1 for resolved bugs
● #2 for patchsets
● #2 in email volume
● #3 in commits
● #4 in lines of code
5. Neutron Deployments
● According to the spring 2015 user survey results:
● 76% of production installations are on Neutron vs. nova-network
● OVS at 46% of production installs (up 3%)
● Linuxbridge at 19% of production installs (up 4%)
● nova-network production usage dropped from 30% to 24%
7. Neutron Kilo Release: By the Numbers
● 45 blueprints completed
● 544 bugs closed
● Advanced services split into separate git repositories and
release tarballs
● Plugin decomposition effort started, resulting in 10+
plugin/driver decompositions
9. Plugin Decomposition
● Addresses pain points: review time, iteration speed, and
ease of using vendor-specific modules
● Move to thin in-tree plugins and drivers, with plugin and
driver functionality maintained outside of Neutron
● Allows for fast iteration for both core Neutron as well as
plugins and drivers
10. Advanced Services Split
● Migrate out LBaaS, VPNaaS, and FWaaS into separate
git repositories
● Allow operators the flexibility of running the services they
want to offer their tenants
● Allow the services teams the chance to iterate quickly
outside the scope of core Neutron
● Reduce gate testing complexity
● Optimize core parts of Neutron into a library
13. Speed and Reliability Improvements
● Agent Child Process Status: Monitors agents and restarts
them when they exit
● Rootwrap Daemon Mode: High performance access to
root for commands run by Neutron agents
14. IPv6
● IPv6 networks are well-supported
● No distributed routing for IPv6
● No floating IPs for IPv6
● Creates a bit of a problem for “bring your own address”
15. Subnet Pools
● Solution to “bring your own addresses”
● Manages allocation of addresses to tenants
● Prevents duplication of addresses
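The allocation behavior described above can be sketched with Python's stdlib `ipaddress` module. This is an illustrative model of what a subnet pool guarantees, not Neutron's actual implementation; the `SubnetPool` class and its method names are made up for this example.

```python
import ipaddress

class SubnetPool:
    """Hands out non-overlapping subnets of a fixed prefix length from
    one supernet -- a sketch of the guarantee a Neutron subnet pool
    provides when tenants "bring their own addresses"."""

    def __init__(self, cidr, prefixlen):
        self.candidates = list(ipaddress.ip_network(cidr).subnets(new_prefix=prefixlen))
        self.allocated = []

    def allocate(self):
        # Skip any candidate that overlaps an existing allocation, so
        # two tenants can never be handed duplicate address space.
        for subnet in self.candidates:
            if not any(subnet.overlaps(a) for a in self.allocated):
                self.allocated.append(subnet)
                return subnet
        raise RuntimeError("pool exhausted")

pool = SubnetPool("10.10.0.0/16", prefixlen=24)
print(pool.allocate())  # 10.10.0.0/24
print(pool.allocate())  # 10.10.1.0/24
```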
17. Neutron Liberty Release: By the Numbers
● 35 blueprints targeted
● 522 bugs targeted
● Plugin decomposition effort continuing, with most
drivers and plugins now out of tree
18. Neutron Stadium
● In accordance with the “Big Tent” OpenStack governance
model, Neutron has also changed its governance model
● Allowing plugin backends to re-enter Neutron via the
Stadium as their own gerrit repositories
● Growing the ecosystem under Neutron as a platform
19. Neutron Governance Changes
● New Lieutenant Model allows scaling core reviewers
● New process for defining work (Request For
Enhancement or RFE) allows for streamlining the way
work is proposed
20. Plugin Decomposition: Phase 2
● Phase 1 completed during Kilo
● Phase 2 will completely remove all third-party code from
the main Neutron repository
● Split out the reference implementation plugin into its own
repository
● Advanced services decomposition as well
● With governance changes, most repositories are now
being added into the Neutron Stadium
21. Neutron and nova-network
● Lots of time spent cross-pollinating between the Neutron and
Nova teams
● Many shared sessions in Vancouver
● PTL sync points
● Neutron has supported the same deployment models as
nova-network for many years
● These are documented now
● Installation guide removed references to nova-network
● New installs are now pointed to Neutron at installation time
● Neutron part of tag “starter-kit:compute”
● https://review.openstack.org/#/c/196438/
22. DefCore and Networking
● DefCore taking on networking this cycle
● Neutron will be the networking choice for DefCore
23. Neutron QoS
● Liberty focus is to enable bandwidth limiting
● We will also lay out the QoS models for future API and
model extensions introducing additional QoS concepts
● QoS policies apply either per-port or per-network
● Feature branch entered merge queue to master moments
ago!
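The per-port/per-network distinction above implies a precedence rule: a policy attached directly to a port should win over one inherited from its network. A minimal sketch of that resolution, with an illustrative data model (the dict fields here are not Neutron's actual schema):

```python
# Sketch of QoS policy resolution: a port-level bandwidth-limit policy
# takes precedence over the network-level one. Field names are made up.

def effective_policy(port, network):
    """Return the QoS policy that applies to a given port."""
    return port.get("qos_policy") or network.get("qos_policy")

net = {"qos_policy": {"max_kbps": 10000}}   # default for the network
vip_port = {"qos_policy": {"max_kbps": 1000}}  # tighter per-port limit
plain_port = {}                              # inherits from the network

print(effective_policy(vip_port, net))    # {'max_kbps': 1000}
print(effective_policy(plain_port, net))  # {'max_kbps': 10000}
```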
24. Neutron LBaaS V2
● Support for Layer-7 switching (e.g. content-based routing)
● Support Octavia as the default reference implementation
● Service-VM-based implementation using HAProxy
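"Content-based routing" means the load balancer picks a backend pool from the request itself, e.g. the URL path. A toy sketch of that idea (rules and pool names are invented for illustration; this is not the LBaaS V2 API):

```python
# Illustrative Layer-7 switching: choose a backend pool by matching the
# HTTP request path against ordered prefix rules, falling back to a
# default pool when nothing matches.

L7_RULES = [
    ("/images/", "static-pool"),
    ("/api/",    "api-pool"),
]

def route(path, default_pool="web-pool"):
    for prefix, pool in L7_RULES:
        if path.startswith(prefix):
            return pool
    return default_pool

print(route("/api/v2/users"))  # api-pool
print(route("/index.html"))    # web-pool
```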
25. Flavor Framework
● Way for operators to offer network services to their clients
● Allows separation of driver functionality and configuration
from consumers of services
● Operators can configure additional vendor features in an
end-user agnostic way
26. NFV Work
● Working with the NFV sub-team in OpenStack to integrate
features relevant in this space
● More seamlessly connect hardware and neutron L2
segments (e.g. with Ironic)
● Unaddressed ports (e.g. a port without an L3 address and
subnet attachment)
● Trunk ports to virtual machines
27. Role Based Access Control
for Networks
● Currently, the shared network concept is not granular
● This work will allow for a more granular approach and
allow tenants to share network resources with other
tenants
● Allows an operator to define a network with limited
access, but also covers the case where operators
pre-create networks for tenants to connect to
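The semantics sketched below show why this is more granular than the legacy `shared` flag: a network is visible to its owner, to everyone if shared globally, or to specific tenants granted access. The dict fields are illustrative, not Neutron's schema:

```python
# Sketch of network RBAC: per-tenant grants instead of the
# all-or-nothing "shared" flag. Field names are made up.

def can_access(network, tenant_id):
    if network["owner"] == tenant_id:
        return True
    if network.get("shared"):          # legacy global sharing
        return True
    return tenant_id in network.get("rbac_tenants", [])

net = {"owner": "t1", "shared": False, "rbac_tenants": ["t2"]}
print(can_access(net, "t1"))  # True  (owner)
print(can_access(net, "t2"))  # True  (granted via RBAC entry)
print(can_access(net, "t3"))  # False (no grant)
```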
28. Pluggable IPAM
● Create a pluggable IPAM system inside of Neutron
● Allows the use of third-party and vendor IPAM systems
● Separates IPAM from Neutron core DB model
● Liberty
● Reference implementation available as alternative
● Enables third-party systems
● Mitaka
● Migration provided to new reference
● Old reference will be removed
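The plug point can be pictured as an abstract allocator that Neutron core calls, with either the reference driver or a third-party one behind it. The interface and method names below are illustrative, not Neutron's actual driver API:

```python
import abc
import ipaddress

# Sketch of a pluggable IPAM seam: core code depends only on the
# abstract driver, so the backend can be swapped without touching the
# Neutron DB model. Names here are hypothetical.

class IpamDriver(abc.ABC):
    @abc.abstractmethod
    def allocate_ip(self, subnet_cidr):
        """Return a free address from the subnet."""

class ReferenceIpam(IpamDriver):
    """In-memory stand-in for the reference implementation."""
    def __init__(self):
        self.used = set()

    def allocate_ip(self, subnet_cidr):
        for host in ipaddress.ip_network(subnet_cidr).hosts():
            if host not in self.used:
                self.used.add(host)
                return str(host)
        raise RuntimeError("subnet exhausted")

driver: IpamDriver = ReferenceIpam()   # a vendor driver could go here
print(driver.allocate_ip("192.0.2.0/29"))  # 192.0.2.1
print(driver.allocate_ip("192.0.2.0/29"))  # 192.0.2.2
```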
29. Prefix Delegation
● Assignment of tenant IPv6 subnets from a PD server
● Alternative to IPAM for IPv6
● Handles the routing next hop
30. DNS Names
● DNS name set on a port
● It will be used for local DNS lookups with dnsmasq
● In Mitaka, it can be given to an external DNS system
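For the local-lookup case, the DNS name on a port ends up as a hosts-file-style record that dnsmasq can serve. A sketch of that rendering, with an illustrative port dict ("openstacklocal" is Neutron's default local DHCP domain):

```python
# Sketch: turn a port's dns_name and fixed IP into a dnsmasq
# additional-hosts line (plain hosts-file format: "IP<TAB>FQDN").
# The port dict is illustrative, not Neutron's port schema.

def addn_hosts_line(port, domain="openstacklocal"):
    return f"{port['fixed_ip']}\t{port['dns_name']}.{domain}"

port = {"fixed_ip": "10.0.0.5", "dns_name": "web1"}
print(addn_hosts_line(port))  # 10.0.0.5	web1.openstacklocal
```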
32. Address Scopes
● Subnet pools are assigned to a scope
● No duplicate addresses
● Routing will not traverse scopes without NAT
● No NAT for routing in the same scope, even “externally”
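The routing rule above reduces to a single comparison: NAT is needed only when traffic crosses a scope boundary. A sketch with an illustrative data model:

```python
# Sketch of the address-scope routing rule: subnets drawn from pools in
# the same scope are routed directly, even to an "external" network;
# crossing scopes requires NAT. Field names are made up.

def needs_nat(src_pool, dst_pool):
    return src_pool["address_scope"] != dst_pool["address_scope"]

internal = {"address_scope": "corp"}
external = {"address_scope": "corp"}      # external net, same scope
provider = {"address_scope": "provider"}  # a different scope

print(needs_nat(internal, external))  # False: routed without NAT
print(needs_nat(internal, provider))  # True: scope boundary, NAT
```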
33. Routed Networks
● Bound the L2 domain
● e.g. route to the top-of-rack
● Not solved by overlays
● For large shared and external networks
● Both “static” and “dynamic” routing
● Schedule instances where IPs are available
● Neutron API and model changes are likely
34. BGP Announcements
● Neutron to speak BGP to the datacenter
● Next hop for subnets in the same address scope
● Floating IPs
● Tenant networks
35. Service Function Chaining
● The idea is simple:
● Service VMs need to be attached at points in the network
● Traffic needs to be steered into these ports
● Create a traffic steering model for chaining which uses
Neutron ports
● Work is being done in a Neutron Stadium project
● networking-sfc project
● “release:independent”
● Expect a release later this fall
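The steering model above is essentially an ordered list of service ports spliced into a traffic path. A toy sketch of that ordering (not the networking-sfc API; port names are invented):

```python
# Sketch of service function chaining: classified traffic between a
# source and destination is steered through each service port in order.

def apply_chain(path, port_chain):
    """Splice the chain's service ports between source and destination."""
    return [path[0], *port_chain, path[-1]]

hops = apply_chain(["vm-a", "vm-b"], ["fw-port", "ids-port"])
print(hops)  # ['vm-a', 'fw-port', 'ids-port', 'vm-b']
```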
36. Container Networking in OpenStack
● Container networking and VM networking working in
harmony: Enter Kuryr
● Kuryr is a generic Docker remote driver which connects
containers to Neutron APIs
● Provides containerized images of common Neutron plugins
● Works as a translator between the Container Network Model and the
Neutron API
● A small snippet of code for “plugging” containers in is required
● Focus is to satisfy Magnum project’s networking
requirements for containers
● Being developed in the Neutron Stadium!
37. OVN
● OVN is Open Source virtual networking for Open vSwitch
● Provides L2/L3 virtual networking
● Security groups (SGs)
● L2/L3/L4 ACLs
● Multiple tunnel overlays (STT and Geneve)
● ToR and software-based logical to physical gateways
● Code is being developed in Neutron Stadium!
● OVN itself in OVS repo
● Neutron plugin in networking-ovn repo
● How is OVN different?
● No agents for simplified deployment
● SGs utilize in-kernel connection tracker support
● DPDK-based and HW accelerated gateways
38. Thank you for your support!
[Neutron] on openstack-dev mailing list
#openstack-neutron on Freenode