SDN: Market situation and next moves
1. SDN: Market situation and
next moves
Prof. Christian Esteve Rothenberg
FEEC/UNICAMP
August 27, 2014
2. Agenda
SDN
• An evolving paradigm
• Understanding Different Models
• Hybrid Deployments
OpenFlow Future
• Challenges: Protocol versions and Model
• Work at ONF, industry and academia
3. Disclaimer/Warning
• Ack the credits for most of the content
• Wake Up! Lots of content ahead!
(especially for a 35min talk)
4. Rethinking the “Division of Labor”
Traditional Computer Networks
Data plane:
Packet
streaming
Forward, filter, buffer, mark,
rate-limit, and measure packets
Source: Adapted from J. Rexford
5. Rethinking the “Division of Labor”
Traditional Computer Networks
Control plane:
Distributed algorithms
Track topology changes, compute
routes, install forwarding rules
Source: Adapted from J. Rexford
6. Rethinking the “Division of Labor”
Traditional Computer Networks
Management plane:
Human time scale
Collect measurements and
configure the equipment
Source: Adapted from J. Rexford
7. Software Defined Networking (SDN)
Logically-centralized control: smart, slow
API to the data plane
(e.g., OpenFlow)
Switches: dumb, fast
8. Death to the Control Plane!?
• Simpler management
– No need to “invert” control-plane operations
• Faster pace of innovation
– Less dependence on vendors and standards
• Easier interoperability
– Compatibility only in “wire” protocols?
• Simpler, cheaper equipment
– Minimal software
Source: Adapted from J. Rexford
10. Old Economic Model → New Economic Model
Can buy HW from anyone (theoretically)
• HW becomes interchangeable,
if it supports OpenFlow
Can buy SW from anyone
• Runs on controllers,
so doesn’t need to run on switch
SDN platform sold separately from ctrl apps?
• Would require stable and open platform
interface
• Currently a debate within ONF…
• Much less lock-in (we hope)
Role of OpenFlow
• Architecturally: boring
– Just a switch specification… zzzzzz
• Economically: hugely important
– Could help break HW vendor lock-in
Source: Adapted from S. Shenker
13. One SDN to rule them all
Actually not, different reasonable models and
approaches to SDN are being pursued
One SDN controller to rule them all, with a
discovery app to find them,
One SDN controller to tell them all, on
which switchport to bind them.
In the Data Center, where the packets fly.
Source Poem: http://dovernetworks.com/?p=83
14. SDN Models
SDN can be considered in terms of different models:
• Yesterday’s SDN: Automation and some level of device/network
programmability using (vendor- and platform-specific) CLI commands and
APIs (for legacy management protocols and systems) on a per-device basis
to indirectly affect the network state
• Canonical/Open SDN: A Networking/Operating/System that oversees the
network data plane and hosts a number of “control programs” that implement
networking services (e.g. OpenFlow model). Split / Decoupled control plane!
• Broker SDN: An API-driven (software-driven / SDN-augmented) hybrid
approach where a broker interacts with applications to affect the network
so that apps are more effective, efficient, and/or offer a better user experience
• Proactive / Declarative SDN: Compiler that translates a high-level
language in which an operator defines what they want from the network and
compiles it into data plane instructions and configuration
• Overlay SDN: An approach where the network edge is programmed by an
SDN controller to manage tunnels between hypervisors and/or ToR
switches. Underlay network control plane untouched.
16. Canonical / Open SDN with Network OS
e.g. the OpenFlow model
Source: Chris Grundemann
17. OpenFlow : Not the Only SDN Tool
Vendor APIs
• Cisco: Open Networking Environment (ONE), EEM (Tcl), Python
scripts
• Juniper: Junos XML API and SLAX (human-readable XSLT)
• Arista EOS: XMPP, Linux scripting (including Python and Perl)
• Dell Force10: Open Automation Framework (Perl, Python, NetBSD
shell)
• F5: iRules (Tcl-based scripts) Source: I. Pepelnjak
18. Broker / API-based SDN:
A SW-driven / SDN-augmented / Hybrid approach
Source: K. Kompella, slides-85-sdnrg-2.pdf
19. Broker / API-based SDN
Example: I2RS (Interface to the Routing System)
Additional SDN Function
Applications need to dynamically:
•Augment routing, based on:
•Policy
•Flow and application awareness
•Time and external changes
With knowledge of:
•Topology (active & potential)
•Network events
•Traffic measurement
•Etc.
Advanced SDN Use Cases
Programming the Routing Information Base
For example, adding static routes
Setting routing policy
Control how the FIB is built
Other router policies
Modify BGP import/export policies
Topology extraction
Pull routing information (including SRLGs)
from network
Topology management
Create virtual links by making connections in
lower layers
Service management
Request LSPs, connections, pseudowires
Bandwidth scheduling
“Set up a VPN” Source: Adrian Farrel
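The “Programming the Routing Information Base” use case above can be illustrated with a small sketch. All names here are invented for illustration; this is not the actual I2RS protocol or data model.

```python
# Illustrative broker-style sketch: an application augments a router's RIB
# (e.g. adding static routes) through an agent, as in the I2RS use cases.
# Class names and the request shape are invented, not from any standard.

class RIB:
    def __init__(self):
        self.routes = {}              # prefix -> next hop

    def add_static(self, prefix, next_hop):
        self.routes[prefix] = next_hop

class I2RSAgent:
    """Broker between applications and the routing system."""
    def __init__(self, rib):
        self.rib = rib

    def request(self, op, prefix, next_hop=None):
        if op == "add":
            self.rib.add_static(prefix, next_hop)
        elif op == "delete":
            self.rib.routes.pop(prefix, None)
        return dict(self.rib.routes)  # return a snapshot of RIB state

agent = I2RSAgent(RIB())
agent.request("add", "10.0.0.0/8", "192.0.2.1")
state = agent.request("add", "198.51.100.0/24", "192.0.2.9")
print(state)
```

The key property being sketched is that the application never touches the FIB directly; it expresses intent to the agent, which owns the routing state.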
20. Broker / API-based SDN
Software-driven Networks
• More than a play on words:
– some in the industry refer to SDN as software-driven networks,
as opposed to software-defined networks.
• Rather than viewing the network as being comprised of
logically centralized control planes with brainless network
devices,
– one views the world as more of a hybrid of the old and the new
• Hybrid approach
– some portion of networks operated
by a logically centralized controller,
– other parts would be run by the
more traditional distributed control plane
Control-plane component(s) Data-plane component(s)
28. Hybrid Networking in SDN
Source: S. Vissicchio et al. Opportunities and Research Challenges of Hybrid Software Defined
Networks. In ACM Computer Communication Review, 44(2), April 2014.
29. Trade-offs of Hybrid Networking in SDN
• Tradeoff analysis suggests that the combination of centralized and
distributed paradigms can provide mutual benefits.
• Future work is needed to devise techniques and interaction mechanisms
that maximize such benefits while limiting the added complexity of the
paradigm coexistence.
• Combination of hybrid models: A wider range of tradeoffs can be
obtained by combining hybrid models together.
Source: S. Vissicchio et al., CCR 14
35. Bare Metal Switching and Programming
Blows Up the SDN Blob!
Source: Adapted from Shrijeet Mukherjee, Cumulus Networks
Programmable Control App
In-the-box
Out-of-the-box
37. A view on ONF evolving attitude1
Early attitude in the ONF: Me! Me!
– “My feature, my feature!”, and “More flex! Optical! Wireless! …! ”
– Fully Programmable Dataplane!
– Protocol independent generic commands! (Byte offset, bit mask, etc)
– “New chips will come and it will all be good!” – But…
Growing attitude: We need this stuff to work
•Now, lots of OF1.3 capable boxes (Nov’13 plugfest)
– don’t work together that well…
– how do I code (or test) using optional features?
•Then the CAB formed (responsibility: nudge chipmakers!)
– Chip Advisory Board explained how to get chipmakers to make new
chips (== biz case)
Revolution!
Reality strikes!
Business case (still) dominates!
1 Adapted from: Curt Beckmann, Brocade, CURT’S ONF UPDATE, NFD7
39. In the Beginning…
• OpenFlow was simple
• A single rule table
• Priority, pattern, actions,
counters, timeouts
• Matching on any of 12
fields, e.g.,
• MAC addresses
• IP addresses
• Transport protocol
• Transport port numbers
Over the Past Five Years…
Version  Date      # Headers
OF 1.0   Dec 2009  12
OF 1.1   Feb 2011  15
OF 1.2   Dec 2011  36
OF 1.3   Jun 2012  40
OF 1.4   Oct 2013  41
• Proliferation of header fields
• Multiple stages of heterogeneous tables
• Still not enough
(e.g., VXLAN, NVGRE, STT, …)
Source: P4, http://arxiv.org/abs/1312.1719
40. Device / OF Option Alignment
Source: Curt Beckmann, Brocade
41. Mapping low level instructions
when pipelines differ
00 Count = 16
01 Prod = 0
02 Bit = 1
03 If (RegX & Bit == 0) goto 05
04 Prod += RegY
05 Bit <<= 1
06 RegY <<= 1
07 Count -= 1
08 If (Count != 0) goto 03
[Or: If (Bit != 0) or (RegY != 0)]
A smart compiler can see this is a “multiply” (Prod = RegX * RegY),
as long as it can see the complete set of code.
But what if the code is scattered in time?
If we ask the compiler to do the translation
piecemeal, it becomes impossible.
[Figure: a pipeline of flow tables, Table0 → Table1 → Table2 → Table3]
Similarly, mapping multi-table OF to legacy ASICs
is tricky or worse… if we must do it all at run-time.
But we actually don’t have to do it ALL at run-time.
Source: Curt Beckmann, Interoperable OpenFlow with NDMs and TTPs
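To make the slide’s point concrete, here is the same shift-and-add loop written out in Python, a sketch that mirrors the numbered pseudocode above (register names and the 16-bit width follow the slide):

```python
def multiply(reg_x, reg_y, width=16):
    """Shift-and-add multiply, mirroring the slide's low-level loop:
    test each bit of RegX and conditionally add the (shifted) RegY."""
    count, prod, bit = width, 0, 1
    while count != 0:                 # 08: loop until Count == 0
        if reg_x & bit:               # 03: skip the add if the bit is clear
            prod += reg_y             # 04: Prod += RegY
        bit <<= 1                     # 05
        reg_y <<= 1                   # 06
        count -= 1                    # 07
    return prod

print(multiply(6, 7))   # 42
```

Seen whole, the loop is obviously `reg_x * reg_y`; seen one instruction at a time, that intent is invisible, which is exactly the piecemeal-translation problem the slide describes.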
42. Future SDN Switches
•Where does it stop?!?
• Simplicity would be nice, but it just isn’t practical
•Configurable packet parser
• Not tied to a specific header format
•Flexible match+action tables
• Multiple tables (in series and/or parallel)
• Able to match on all defined fields
•General packet-processing primitives
• Copy, add, remove, and modify
• For both header fields and meta-data
Source: P4, http://arxiv.org/abs/1312.1719
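The “configurable packet parser” bullet above can be sketched as a data-driven state machine: the header layout is data, not hard-coded logic. The two-header parse graph and field names below are invented for illustration, not a real switch API.

```python
# Sketch of a reconfigurable parser: header formats live in a table
# (PARSE_GRAPH) that could be swapped out without changing parse().

PARSE_GRAPH = {
    "ethernet": {"fields": [("dst", 6), ("src", 6), ("ethertype", 2)],
                 "next": {b"\x08\x00": "ipv4"}},   # 0x0800 -> parse IPv4 next
    "ipv4":     {"fields": [("ver_ihl", 1), ("tos", 1), ("length", 2)],
                 "next": {}},                      # stop after (partial) IPv4
}

def parse(packet, start="ethernet"):
    headers, offset, state = {}, 0, start
    while state:
        spec, fields = PARSE_GRAPH[state], {}
        for name, size in spec["fields"]:
            fields[name] = packet[offset:offset + size]
            offset += size
        headers[state] = fields
        # Transition on the extracted ethertype (None ends the walk).
        state = spec["next"].get(fields.get("ethertype"))
    return headers

pkt = bytes(6) + bytes(6) + b"\x08\x00" + b"\x45\x00\x00\x14"
print(sorted(parse(pkt)))
```

Because the graph is data, supporting a new encapsulation means adding an entry, not reflashing fixed parsing logic; that is the reconfigurability the slide asks for.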
44. “OpenFlow 2.0”
[Figure: the SDN control plane drives a target switch in two phases: configuring (a compiler sets up the parser, tables, and control flow from a parser & table configuration) and populating (a rule translator installs and queries rules)]
Source: P4, http://arxiv.org/abs/1312.1719
45. P4 Language
Programming Protocol-Independent Packet
Processors
P4 Compiler
•Parser
• Programmable parser: translate to state machine
• Fixed parser: verify the description is consistent
•Control program
• Target-independent: table graph of dependencies
• Target-dependent: mapping to switch resources
•Rule translation
• Verify that rules agree with the (logical) table types
• Translate the rules to the physical tables
Source: Programming Protocol-Independent Packet Processors,
http://arxiv.org/abs/1312.1719
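The “rule translation” step above (verifying that rules agree with the logical table types) can be sketched as a validation pass. Table names, match fields, and actions below are invented for illustration, not taken from the P4 specification.

```python
# Sketch of rule translation's verification step: reject a rule before
# installation if it matches fields or uses actions the logical table
# type does not allow.

TABLE_TYPES = {
    "l2":  {"match": {"eth_dst"},
            "actions": {"output", "drop"}},
    "acl": {"match": {"ip_src", "ip_dst", "tcp_dport"},
            "actions": {"drop", "allow"}},
}

def validate_rule(table, match_fields, action):
    spec = TABLE_TYPES[table]
    bad = set(match_fields) - spec["match"]
    if bad:
        raise ValueError(f"table {table!r} cannot match on {sorted(bad)}")
    if action not in spec["actions"]:
        raise ValueError(f"table {table!r} does not support action {action!r}")
    return True

print(validate_rule("acl", {"ip_src", "tcp_dport"}, "drop"))  # accepted
```

A real compiler would go on to map validated logical rules onto physical tables; the sketch only shows the agreement check that makes that mapping safe.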
46. Ongoing efforts towards an alternate OF+
• OpenFlow 1.x
• Vendor-agnostic API, but only for fixed-function switches
• An alternate future?
• Protocol independence
• Target independence
• Reconfigurability in the field
• P4 language: a straw-man proposal
• To trigger discussion and debate. Much, much more work to do!
• Related Work
• Abstract forwarding model for OpenFlow
• Kangaroo programmable parser
• NOSIX portability layer for OpenFlow
• Protocol-oblivious forwarding (POF) by Huawei
• OpFlex by Cisco ?
• Table Type Patterns in ONF FAWG
47. ONF FAWG work on Table Type Patterns (TTP)
Defining Datapath Models in advance
• “Datapath Model” must be detailed, unambiguous
• Must spell out matches and actions allowed in each table
• So no “pipeline surprises” at run time
• Apps will have different needs…no single DP model will work
• So, a range of Datapath Models
• Powerful platforms might support more than one model
• Some apps may work on more than one model
• Models need not be specified by ONF, others can do it too
• App and switch must agree on same model
• A multi-vendor ecosystem means sharing common language
• “Agree” means synching up… “negotiation”
• “Negotiable Datapath Model” NDM
• Must evolve over time as OF evolves Source: ONF FAWG
48. How TTPs Can Help
• TTPs are “Table Type Patterns” that market participants can
define
• TTPs are 1st gen of “Negotiable Datapath Models” (NDMs)
• TTPs = “pre-baked pipelines” describing specific switch functions in OF1.x terms
• With TTPs, pipelines can be mapped before run-time
• Switches, controllers become deterministic (as they need to be)
• Once a TTP is agreed: the Controller uses only TTP messages, the Switch supports
all TTP messages, and all messages are valid OF1.x messages
• TTP Examples:
• “VID Mapping L2 Switch”, “VXLAN Gateway”, “NVGRE Gateway”, “v4
Router w Ingress ACL”, “v6 Router w Egress ACL”, “MPLS Edge & Core
Router”
• TTPs help sort out interoperability
• Product sheets list supported TTPs, clarifying what works with what
Source: ONF FAWG
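The “agree on a TTP before run-time” step can be sketched as a simple negotiation: before sending any flow rules, controller and switch settle on a datapath model both support. The TTP names reuse the slide’s examples; the function itself is illustrative, not the ONF negotiation mechanism.

```python
# Sketch of TTP/NDM negotiation: pick the first datapath model (in the
# application's preference order) that the switch also supports.

def negotiate_ttp(controller_ttps, switch_ttps):
    common = [t for t in controller_ttps if t in switch_ttps]
    if not common:
        raise RuntimeError("no common datapath model; cannot interoperate")
    return common[0]          # keep the application's preference order

app_needs = ["VXLAN Gateway", "VID Mapping L2 Switch"]
switch_supports = {"VID Mapping L2 Switch", "v4 Router w Ingress ACL"}
print(negotiate_ttp(app_needs, switch_supports))
```

Once a common TTP is chosen, both sides know the pipeline shape in advance, which is what makes run-time behavior deterministic.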
50. TTP Benefits
• Ease of development within a context of diversity
• Done such that interoperability is deterministic
• Interoperability visible to market participants
• No logjams required by “standardized profiles”
• Framework is for products that are “TTP aware”
• Key for determinism when multiple flow tables needed
• But TTPs also turned out quite useful for single tables!
• TTPs can serve as precise test profiles
• Can resolve the “optional feature” challenge
• Visible to market participants
Source: ONF FAWG
51. SDN asks (at least) three major questions
Where the control plane resides
“Distributed vs Centralized” ?
How does the Control Plane talk
to the Data Plane ?
How are Control and Data Planes programmed?
Source: T. Nadeu, slides-85-sdnrg-5.pptx
52. SDN asks (at least) three major questions
Where the control plane resides
“Distributed vs Centralized” ?
• What state belongs in distributed protocols?
• What state must stay local to switches?
• What state should be centralized?
•What are the effects of each on:
- state synchronization overhead
- total control plane overhead
- system stability and resiliency
- efficiency in resource use
- control loop tightness
Source: E. Crabbe, slides-85-sdnrg-7.pdf
(Question 1)
53. SDN asks (at least) three major questions
• Proprietary IPC
• OpenFlow (with or w/extensions)
• Open Source south-bound protocols
• Via SDN controller broker and south-bound plug-ins
• Other standardized protocols
•What are the effects of each on:
-Interoperability, Evolvability, Performance
-Vendor Lock-in
How does the Control Plane talk
to the Data Plane? (Question 2)
54. SDN asks (at least) three major questions
• Levels of Abstraction
• Open APIs
• Standardized Protocols
•What are the effects of each on:
-Data plane flexibility
-Integration with legacy
-Interoperability (CP / DP)
-Vendor lock-in
Source: E. Crabbe, slides-85-sdnrg-7.pdf
How are Control and Data Planes programmed? (Question 3)
55. Concluding thoughts on SDN
• Remember: SDN is not a protocol (OpenFlow is);
– SDN is an operational and programming architecture.
• SDN starts a new dialogue about network programmability,
control models, the modernization of application interfaces
to the network, and true openness around these things.
• From device-centric HW-constrained networking to
network-wide service-oriented SW-defined networking
– SDN is a new approach to the current world of networking,
but it is still networking.
• Vendor Lock-in : It is about features, be it SW or HW
• Cost discussion : May be shifted from HW to SW / services
56. Further reading: “Software-Defined Networking: A Comprehensive Survey”
http://arxiv.org/abs/1406.0440
Contributions welcome:
https://github.com/SDN-Survey/latex/wiki
Thank you!
(more) Questions?
58. SDN: a Fundamental Step Forward
• or just a new whip to beat vendors with?
What makes SDN attractive?
• The idea that a network is more than the sum of its parts
– I.e., take a network-wide view rather than a box-centric view
• The idea that creating network services can be a science
rather than a set of hacks on hacks on hacks
– Especially hacks that vary by box, by vendor and by OS version
• The idea that there should be a discipline and
methodology to service correctness
– Rather than testing (and more testing), declaring victory, only to
fail in the real world because of some unanticipated interaction
Source: K. Kompella, slides-85-sdnrg-2.pdf
59. SDN is a real step
1. if SDN gives us an abstraction of the network
2. if, through this abstraction, we have a means of reasoning
about the network and network services
3. if SDN offers a means of verifying correct operation of the
network or of a service
4. if SDN offers a means of predicting service interaction
5. Finally, if SDN offers a means of setting (conceptual)
assertions by which we can get early warning that something
is wrong
Source: K. Kompella, slides-85-sdnrg-2.pdf
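Point 5 above can be illustrated with a toy invariant check over network state. The one-tenant-per-edge-port rule below is an invented example, not from the talk; the point is only that assertions over state give early warning before a service fails in production.

```python
# Sketch of a (conceptual) network assertion: flag any edge port that is
# not mapped to exactly one tenant. The invariant itself is illustrative.

def check_invariant(port_tenant_map):
    """Return the ports violating the one-tenant-per-port rule."""
    return [port for port, tenants in port_tenant_map.items()
            if len(tenants) != 1]

state = {"p1": {"tenantA"},            # fine
         "p2": {"tenantA", "tenantB"}, # leaked across tenants
         "p3": set()}                  # orphaned port
print(sorted(check_invariant(state)))
```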
60. The IRS Architecture
[Figure: Applications use IRS Clients (on servers) that speak the IRS Protocol & Data Encoding to an IRS Agent on the router; the agent sits alongside the Policy DB, the RIBs and RIB Manager, the Topology DB, OAM/Events/Measurement, and the Routing and Signaling Protocols that feed the FIB in the Data Plane]
Source: Adrian Farrel
61. SDN controller based on standard building blocks
[Figure: an SDN controller composed of standard building blocks (1-TED, 2-PCE, 3-SDN orchestrator, 4-ALTO, 5-OAM Handler, 6-VNTM, 7-Provisioning Manager), with applications (Internet, CDN, cloud…) on top, controlling an end-to-end network of OpenFlow data center and MAN domains, an IP/MPLS core, an MPLS MAN, and OpenFlow and GMPLS optical domains via PCEP, OpenFlow, NETCONF, GMPLS, and CLI]
Most of these building blocks are still in the definition and standardization process.
E2E networks might be pure OpenFlow based one day, but the migration process will take some time.
62. Application-Based Network Operation
• SDN tools provide high-function, but low granularity
• There is a need to coordinate SDN operation to
provide service-level features
• Some components already exist or are proposed
– Orchestrators
– OpenFlow Controllers
– Routing protocols
– Config daemons
– IRS Client
– Virtual Network Topology Manager
• Need a wider architecture to pull the tools together
– A framework in which the SDN components operate
Source: Adrian Farrel
Speaker notes
Maybe this is a better picture… though need some details of what happens in each plane… like in next slides… or, have both?
Further reading:
http://theborgqueen.wordpress.com/2014/03/31/the-legend-of-sdn-one-controller-to-rule-them-all/
Source Poem: http://dovernetworks.com/?p=83
Taking the broader, generalized view on SDN, different models can be defined to refer to the SDN control applied to the network.
Historically, network configuration state has remained largely static, unchanged, and commonly untouchable.
Manual configuration and CLI-based configuration on a device-by-device basis was the norm, and network management constituted the basic “screen scraping” or use of Expect scripts as a way to solve manageability problems and core scalability issues (cut-and-paste methodology).
The highest end of programmatic interfaces included XML interfaces and onboard Perl, Tk/Tcl, and Expect.
With multiple routers, switches, and servers working as a system (and services that are routing traffic across multiple domains with different users, permissions, and policies), control and management state needs to be applied across the network as an operation.
Element-by-element management simply doesn’t provide enough flexibility and agility or the notion of dynamic or ephemeral data (configuration and state not persistently held in the config file).
Source: D. Ward. Foreword in the SDN book. Figure by Chris Grundemann
OpenFlow switching devices and to establish communication between them and controllers.
Using the OpenFlow southbound API the controller can add, update, and delete flow entries, both reactively (in response to packets) and proactively.
OpenFlow provides a vendor-agnostic southbound API from the network control layer to the network devices (data layer).
Applications implemented on top of OpenFlow controllers act as a single point of control to define and enforce control policies.
The model promises customers the ability to buy devices from different vendors and use third-party (or home-made) controllers and management applications.
Source (Figure): I. Pepelnjak (ioshints.info)
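The reactive/proactive distinction in the note above can be sketched in a few lines. The mini controller and switch below are illustrative only, not a real controller API.

```python
# Sketch of proactive vs reactive flow installation: proactive rules are
# pushed before traffic arrives; reactive rules are computed on a table
# miss reported by the switch ("packet-in").

class Switch:
    def __init__(self):
        self.flows = {}               # match -> action

class Controller:
    def __init__(self, switch):
        self.switch = switch

    def proactive_setup(self):
        # Rules installed ahead of time, before any packet is seen.
        self.switch.flows["10.0.0.0/8"] = "output:1"

    def packet_in(self, dst):
        # Reactive path: decide a rule in response to a miss.
        self.switch.flows[dst] = "output:2"

sw = Switch()
ctl = Controller(sw)
ctl.proactive_setup()
ctl.packet_in("192.0.2.7")            # a miss punted up to the controller
print(sw.flows)
```

Real deployments mix both: proactive coarse rules in the core, reactive fine-grained rules at the edge, as later notes in this deck point out.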
So, by now one should already have noted that OpenFlow is not the only SDN tool. Even in the purist view you need to consider the companion ONF-standardized configuration protocol, OF-Config.
In addition, IETF activities like NETCONF and I2RS allow new bidirectional ways to let applications talk to the network.
Vendors have worked on defining and opening APIs to allow better programmatic interfaces with their devices.
These tools allow for SDN as a broker models of networking.
“In the broker model, applications interact with the network via the broker so that they or the network can be more efficient, enforce target SLAs, or provide a more satisfactory end user experience. The obvious distinction between the models is in the type of application that the architecture is meant to service (the breadth of the solution).”
--Source: T. Nadeau, SDN Book.
The broker model corresponds to the figure including a simple "orchestrator" and various "plug-ins" between the orchestrator function and network protocols.
To this end, several interfaces and mechanisms are required to address issues such as enabling the broker to manipulate and interact with the control planes of devices such as routers and switches, as well as a discourse between different orchestrators.
Policy considerations for the broker and inter-broker functions also need to be considered.
This model can also be related to the hybrid SDN that aims at augmenting existing distributed control planes with centralized control functions, ultimately allowing applications that interact (program, set/read state) with the network.
Source: http://www.olddog.co.uk/IIR-SDN-Farrel.ppt
Software-Driven Networks (SDN) is an approach to networks that enables applications to converse with and manipulate the control software of network devices and resources. SDNs comprise applications, control software, and interfaces to services hosted in an overlay or logical/virtual network, as well as the possibly identical components that make up the underlying physical network. Modern applications require the ability to easily interact with and manipulate these resources. Applications can benefit from knowing the available resources and from requesting that the network make those resources available in specific ways. To this end, there is a requirement to couple applications more closely to the underlying resources on which they depend.
Further reading
http://www.ietf.org/proceedings/82/sdn.html
http://tools.ietf.org/html/draft-stiliadis-sdnp-framework-use-cases-01
http://tools.ietf.org/html/draft-nadeau-sdn-problem-statement-01
Sources: David Ward Foreword, SDN book by T. Nadeau, IETF sdn
An SDN controller/framework can be viewed as a network compiler, although it is very possible to modify the model in the future to function as a compiler generating network element configuration. In this model, the high-level data model allows the application/operator to simply express intent, and the controller executes this by compiling the intent into primitives (i.e., code) that it executes.
The compiler uses the high-level data model to convert API requests for network actions into low-level data model for implementation via the control code.
Juniper Networks refines this with the "SDN as network compiler" concept: high-level, user-friendly/app-friendly data models translate into lower-level, network-strategy/protocol-specific primitives (e.g., L3VPN VRFs, routes, and policies).
The notion of the SDN controller as a compiler also applies in the Contrail SDN offering (acquired by Juniper) to configure virtualized data centers to deliver IaaS.
Source: T. Nadeau, SDN Book.
The configuration nodes are responsible for transforming any change in the high-level service data model to a corresponding set of changes in the low-level technology data model. This is conceptually similar to a just-in-time (JIT) compiler—hence the term “SDN as a compiler” is sometimes used to describe the architecture of the Contrail system.
The control nodes are responsible for detecting the desired state of the network as described by the low-level technology data model using a combination of southbound protocols including XMPP, BGP, and NETCONF.
OpenContrail contains a data model which describes the high level service layer abstractions. This data model contains objects such as virtual networks, virtual machines, and policies. The objects in the service data model can be created, modified, deleted, and queried using north bound REST APIs. In fact, the north bound REST APIs are automatically generated from this data model.
Between the service data model and the technology data model sits a transformation engine. The transformation engine is responsible for translating the service data model to the technology data model. When you invoke the north bound REST APIs to instantiate a virtual network object in the service data model, the transformation engine wakes up and figures out: "Hmmm… so you want a virtual network. That means I need to create these routing instances over here, and those overlay tunnels over there, and I need to put these routes in those routing instances." The transformation engine then instantiates objects in the technology data model to represent the existence of those low-level objects.
There are two types of data models—the high-level service data model and the low-level technology data model. Both data models are described using a formal data modeling language that is currently based on an IF-MAP XML schema although YANG is also being considered as a future possible modeling language. The high-level service data model describes the desired state of the network at a very high level of abstraction, using objects that map directly to services provided to end users—for example, a virtual network, a connectivity policy, or a security policy.
The low-level technology data model describes the desired state of the network at a very low level of abstraction, using objects that map to specific network protocol constructs such as a BGP route target or a VXLAN network identifier.
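The service-to-technology transformation described in these notes can be sketched as follows. All object and field names are invented for illustration; this is not the actual OpenContrail schema or transformation engine.

```python
# Sketch of a transformation engine: compile one high-level service
# object (a "virtual network") into low-level technology objects
# (a routing instance and a route target). Names are illustrative.

def transform(service_obj):
    assert service_obj["type"] == "virtual-network"
    name = service_obj["name"]
    return [
        {"type": "routing-instance", "name": f"{name}-ri"},
        {"type": "route-target", "name": f"target:64512:{service_obj['vn_id']}"},
    ]

low_level = transform({"type": "virtual-network", "name": "blue", "vn_id": 7})
print([obj["type"] for obj in low_level])
```

The just-in-time-compiler analogy in the note above is exactly this: a declarative high-level object is expanded, on demand, into the concrete protocol constructs that realize it.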
In a system managed through declarative control, underlying objects handle their own configuration state changes and are responsible only for passing exceptions or faults back to the control system. This approach reduces the burden and complexity of the control system and allows greater scale. This system increases scalability by allowing the methods of underlying objects to request state changes from one another and from lower-level objects
OpFlex is an open and extensible policy protocol for transferring abstract policy in XML or JavaScript Object Notation (JSON) between a network policy controller such as the Cisco APIC and any device, including hypervisor switches, physical switches, and Layer 4 through 7 network services. Cisco and its partners are working through the IETF and open source community to standardize OpFlex
Cisco is positioning its overall Application Centric Infrastructure in the larger SDN context: as an optimal alternative to, and yes, a discouragement or inhibitor or deterrent to, imperative SDN (e.g., OpenFlow, OVSDB) adoption and proliferation. And though it can work within an OpenFlow/OVSDB SDN, Cisco wants you to buy OpFlex and ACI, and not OpenFlow/OVSDB, VMware's NSX, and the upcoming OpenStack Congress policy model. Anything to the contrary would be very unlike Cisco, all "open" positioning and posturing acknowledged.
Source: http://www.networkworld.com/community/blog/ciscos-mixed-messages
Further reading: http://blogs.cisco.com/datacenter/introducing-opflex-a-new-standards-based-protocol-for-application-centric-infrastructure/
The underlay/overlay concept is one in which the distributed control plane provides the underlay and the centralized control plane provides a logical overlay that uses the underlay as a network transport.
In the early days of SDN, there were some companies using the reactive hop-by-hop approach, but most of them have since evolved their position. They now also support tunnels to create overlays, support the proactive model in addition to the reactive model, and use a combination of fine-grained flows at the virtual edge with coarse-grained flows in the physical core to avoid scaling limitations.
The overlay approach, by contrast, uses overlay tunnels to virtualize or "slice" the network. These tunnels generally terminate in virtual switches or virtual routers in hypervisors, but they can also terminate in physical routers or switches for "gateway" use cases.
The overlay approach was pioneered by software vendors such as VMware (Nicira) and Microsoft, but it is now widely supported by major networking vendors as well.
OpenStack seems to have a great architecture: all device-specific code is abstracted into plugins that have a well-defined API, allowing numerous (more or less innovative) implementations under the same umbrella orchestration system.
In the figure above. External Manager == SDN Controller (e.g., OpenDaylight)
Figure source and further reading: http://blog.ipspace.net/2013/10/openstack-quantum-neutron-plug-in-there.html
The emergent SDN ecosystem features a broader view of the network control layer, which interfaces with the network elements using multiple protocols and exposes higher-level northbound APIs.
Using the southbound API (e.g., OpenFlow, ForCES, PCEP), the controller can add, update, and delete flow entries, both reactively (in response to packets) and proactively.
On top of the network control layer, an orchestration layer that interacts with multiple applications represents an overarching approach to provide even higher-level northbound APIs and to let multiple controllers interwork.
SDN promises to ease design, operation and management of communication networks. However, SDN comes with its own set of challenges, including incremental deployability, robustness, and scalability. Those challenges make a full SDN deployment difficult in the short-term and possibly inconvenient in the longer-term.
The topology-based hybrid SDN (TB hSDN) model relies on a topological separation of the nodes controlled by each paradigm. More precisely, the network is partitioned into zones, so that each node belongs to only one zone.
In the service-based hybrid SDN (SB hSDN) model, CN and SDN provide different services. To implement some services, like network-wide forwarding, the two paradigms can span an overlapping set of nodes, controlling a different portion of the FIB of each node. For example, network-wide services like forwarding are delegated to CN, while SDN provides edge-to-edge services like enforcement of traffic engineering and access policies, or services requiring full traffic visibility, like monitoring.
The class-based hybrid SDN (CB hSDN) model is based on the partition of traffic into classes, and on the division of classes into CN-controlled and SDN-controlled. Contrary to TB hSDN, CN and SDN typically span all the nodes in this model, controlling disjoint sets of node FIB entries. Moreover, in contrast with SB hSDN, each paradigm realizes all network services for the traffic classes assigned to it. For example, SDN (resp., CN) fills the FIB entries of each node to control a small (resp., big) portion of traffic.
In the integrated hSDN model, SDN is responsible for all the network services, and uses CN protocols as an interface to node FIBs. For example, it can control forwarding paths by injecting carefully selected routes into a routing system or adjusting protocol settings (e.g., IGP weights).
A comparative analysis of the presented hybrid SDN models considers the following dimensions:
i) expressiveness and management simplicity, in terms of ease of non-IP-based forwarding and enforcement of middlebox policies;
ii) robustness and scalability, as architectural concerns;
iii) deployment costs, in terms of hardware upgrade costs, custom software to be produced, and needed expertise;
iv) flexibility and paradigm complexity, especially as related to the coexistence of multiple paradigms.
In the table, comparison dimensions and network architectures are respectively disposed on rows and columns. Rows are divided into three groups, corresponding to the hardest challenges for CN (first group), SDN (second group), and hybrid SDN (third group) models.
General hybridization benefits: With respect to CN networks, hybrid models enable flexibility (e.g., easy match on any packet field for middleboxing) and SDN-specific features (e.g., declarative management interface). At the same time, they partially inherit robustness, scalability, technology maturity and low deployment costs from CN.
Hybridization drawbacks: While combining CN and SDN enables new fine-tunable tradeoffs, hybrid models have their own peculiar drawbacks. In particular, the need for managing heterogeneous paradigms and ensuring profitable interaction between them is especially relevant, since it affects the realizability of network-wide services. The impact of such heterogeneity actually depends on the model.
FP/SDN
Properties:
-- Complete Separation of CP and FP
-- Centralized Control
-- Open Interface/programmable Forwarding Plane
-- Examples: OF, ForCES, various control platforms
OL/SDN
Properties:
-- Retains existing (or simplified) Control Planes
-- Programmable overlay control plane
-- Examples: Various Overlay technologies
CP/SDN
Properties:
-- Retains existing (distributed) Control Planes
-- Programmable control plane
-- Examples: PCE, I2RS, BGP-LS, vendor SDKs
What are "Bare Metal Switches"? Bare Metal refers to switches where the software and hardware are sold separately, that is, you can buy just the "bare metal". In other words, the end-user is free to load an operating system of their choice. ONL is an open source distribution of Linux for Bare Metal switches.
The network OS must comply with three key component areas within the device:
The CPU and the motherboard – You need to make sure you have full access to the driver code of all chips, so that complete functionality can be accessed through the new OS. This need is new and particularly topical for Ethernet switches, because servers don't have U-Boot functionality; they have a BIOS.
The universal boot loader (U-Boot) – As part of a modern Linux OS, the boot loader is essential in both white box servers (think PXE boot) and white box switches. It holds the details of the chassis itself, such as the fans, lights, and USB ports, which map directly to the physical box. Every manufacturer has a different U-Boot loader; even within product families of the same vendor, the U-Boot code can differ.
The ASIC – Every white box or legacy switch has an ASIC in it. The ASIC provides performance scaling and is key to how switching helps build a high-bandwidth network. Whether you leverage merchant silicon (Broadcom, Intel, Marvell, Mellanox), or your own ASICs, you need to have a means to connect the chip to your software stack. That is through the ASIC programming interface or software development kit (SDK).
In short: the CPU and the motherboard (or the BSP), "U-Boot" (or universal boot loader), and the ASIC's APIs. All three collectively need to be matched to the network OS in order to port your OS onto that piece of metal. This is why all the so-called white box, software defined networking (SDN) vendors either sell the complete switch-plus-OS solution or provide a list of pre-qualified devices so you can buy the hardware directly. Both paths deliver the same solution.
Switch Light OS is the commercial SDN switching software from Big Switch Networks. It builds upon the Open Network Linux distribution available within the Open Compute Project.
ONL is the open source Linux operating system distribution for "bare metal" ethernet switch hardware. By separating software and hardware procurement as well as deployment decisions, ONL brings the server or compute industry models to the networking infrastructure.
ONL is a base-level operating system and provides a management interface to the switching hardware. It uses the Open Network Install Environment (ONIE) to install onto on-board flash memory. The components in a standard ONL distribution include: a Debian Linux kernel, a set of device drivers, installation scripts, and a net-boot capable zero touch networking bootloader, customized with enhanced network boot functionality for a variety of bare metal switch devices. ONL also includes several advanced features that optimize headless switch operation and improve performance by minimizing writes to flash memory.
By leveraging the emerging industry standard ONIE and ONL, and a hardware compatibility list shared across the bare metal switching ecosystem, Switch Light is designed with migration in mind. On startup, a user can opt to boot up either the Switch Light OS, traditional networking OSs from other vendors or a hardware diagnostic OS. There are no hardware or cabling changes required to go between controller-based and traditional box-by-box networking designs -- a simple, low-risk migration path to SDN.
Install the Switch Light OS on existing or separately procured hardware switches as a part of SDN Products deployment.
Enter the world of data center networking. Today, switches are black boxes with integrated data planes, control planes and feature sets from one single vendor. There are various levels of programmability and upgradability, but you cannot leverage the power of various hardware vendors and operating systems for your specific application requirements, and the cost for sufficient capacity and performance is prohibitive. Imagine a world where the networking gear is decoupled from the network operating system. Why now? Merchant silicon has surpassed custom Application-Specific Integrated Circuits (ASICs). This in turn enables enterprise grade networking hardware from Original Design Manufacturers (ODMs) and gets you robust switches with incredible price-performance. The only missing piece is a powerful, reliable, and proven operating system, one that leverages scale and collaboration to enable scores of applications.
Source: http://cumulusnetworks.com/rethink_network/
Field Explosion in OpenFlow
The problem they’re hoping to solve is the shifting nature of the OpenFlow protocol.
Consider the packet header. OpenFlow checks certain fields in the header (the destination IP address, for example) and looks for a match in its flow tables. Having found a match, OpenFlow takes the action dictated in the table.
But the number of fields OpenFlow checks keeps increasing, from 12 with OpenFlow 1.0 up to 41 with OpenFlow 1.4. The spec keeps getting more complicated as it gets extended into different use cases.
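The match-then-act loop described above can be sketched in a few lines. This is a simplified illustration, not real OpenFlow: the rule format, field names, and priority handling are toy versions of what the spec defines. What it shows is that "field explosion" doesn't change the lookup logic at all — it only grows the set of keys every switch must be able to match on.

```python
# Minimal sketch of OpenFlow-style matching: each rule matches on a
# subset of header fields (wildcarding the rest) and carries an action;
# the highest-priority matching rule wins.

def match(rule, pkt):
    """A rule matches if every field it specifies equals the packet's."""
    return all(pkt.get(f) == v for f, v in rule["match"].items())

def apply_table(table, pkt):
    for rule in sorted(table, key=lambda r: -r["priority"]):
        if match(rule, pkt):
            return rule["action"]
    return "drop"  # table-miss behavior (toy default)

table = [
    {"priority": 10, "match": {"ip_dst": "10.0.0.1"}, "action": "output:1"},
    {"priority": 5,  "match": {"eth_type": 0x0806},   "action": "controller"},  # ARP
]

print(apply_table(table, {"ip_dst": "10.0.0.1", "eth_type": 0x0800}))
```

Going from 12 matchable fields (OF 1.0) to 41 (OF 1.4) leaves this loop untouched; the complexity lands in the hardware, which must support matching on all of them.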
OpenFlow 1.4 didn't stop for lack of wanting more fields; it merely put on the brakes.
This is natural and a sign of success of OpenFlow:
enable a wider range of controller apps
expose more of the capabilities of the switch
E.g., adding support for MPLS, inter-table meta-data, ARP/ICMP, IPv6, etc.
New encap formats arising much faster than vendors spin new hardware
Source: http://www.ethernetsummit.com/English/Collaterals/Proceedings/2013/20130402_OFS__S13_Beckmann.pdf
OFS1.0 limits both schools, but especially DP innovation
Multiple Flow Tables Multiple flow tables added in OFS1.1
Multiple tables allows for standardized complex forwarding
Multiple tables offer lots of power, plus challenges
Big architectural changes vs. OFS1.0… "What's a flow?"
Trade off: Flexibility has a price: speed/power/density/cost
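The multi-table pipeline introduced in OFS 1.1 can be sketched as follows. This is a toy model, not the real spec: table contents, field names, and the goto convention are invented for illustration. It shows why "what's a flow?" becomes ambiguous — the packet's treatment is now assembled across several tables rather than read from a single entry.

```python
# Sketch of a multi-table pipeline (in the spirit of OFS 1.1+): each
# matching rule contributes actions and can direct the packet to a later
# table ("goto"); the pipeline ends when a rule has no goto or a table
# misses.

def match(rule, pkt):
    return all(pkt.get(f) == v for f, v in rule["match"].items())

def run_pipeline(tables, pkt, start=0):
    actions, table_id = [], start
    while table_id is not None:
        rule = next((r for r in tables[table_id] if match(r, pkt)), None)
        if rule is None:
            break  # table-miss: stop processing (toy behavior)
        actions += rule["actions"]
        table_id = rule.get("goto")  # None terminates the pipeline
    return actions

tables = {
    0: [{"match": {"eth_type": 0x0800}, "actions": ["dec_ttl"], "goto": 1}],
    1: [{"match": {"ip_dst": "10.0.0.1"}, "actions": ["output:1"]}],
}
print(run_pipeline(tables, {"eth_type": 0x0800, "ip_dst": "10.0.0.1"}))
```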
OpenFlow schools: All devices work with control plane innovation
GP CPUs & NPUs best for data plane innovation
OF-Switch concedes feature diversity, kinda:
Lots of optional features. But with many options, what works with what?
Should app developers use optional features? Or should they avoid them?
So x86, with “optional instructions”? Really complicates interoperability
Architecture diversity also matters
With single flow table, no problem
Multi-table OpenFlow changed the game
But the framework didn’t change
Implicit assumption: same messages are enough
Valid only if all boxes offer complete OpenFlow pipeline
Because no mapping would be required in that case
Bad news: hardware boxes don’t (yet?) offer complete OF pipelines
Even new silicon will have diversity over time
And platform OS will vary
So networking will vary at least as much as PC’s have varied
This may sound like a pipe dream, and certainly it won't happen overnight.
But, there are promising signs of progress in this direction…
This would allow for the creation of table types that OpenFlow currently can’t handle. It also means normal Ethernet switching processes could be upended — you could order the switch to look up the destination MAC address before looking up the Ethertype (it’s normally done in the reverse order).
The result is that the switch could adapt to new types of switching, whether it’s OpenFlow 2.0 or something crazier.
Such switches don't exist yet — and that's apparently where Barefoot Networks comes in. The startup, rumored to be working on Ethernet chips, might be trying to build this fully reconfigurable switch.
Source / Further reading:
http://www.sdncentral.com/news/openflow-2-0-bring-new-flexibility-switches/2014/03/
Two modes: (i) configuration and (ii) populating
Compiler configures the parser, lays out the tables (cognizant of switch resources and capabilities), and translates the rules to map to the hardware tables
The compiler could run directly on the switch (or at least some backend portion of the compiler would do so)
The new breed of switch, called a programmable, protocol-independent packet processor, would be configured using a high-level compiler — the authors propose the name P4 — where users could specify the rules to put into the tables. P4 is also where the logical dependencies between tables would be defined. (If Table 2 has some entries that call for moving packets to either Table 4 or Table 5, then those three have a logical dependency.) P4 would take all that information and tell it to the control plane, which would use OpenFlow 2.0 to implement the rules in the switches.
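The logical table dependencies mentioned above (Table 2 feeding Tables 4 and 5) can be made concrete with a small graph pass. This is illustrative only — the table numbers and the staging algorithm are not from P4 itself — but it shows the kind of analysis a compiler can do: tables with no dependency between them land in the same stage and are candidates for parallel hardware stages.

```python
# Toy table-dependency graph in the spirit of the example in the text:
# Table 2 feeds Tables 4 and 5; Table 1 feeds Table 3.
deps = {2: [], 4: [2], 5: [2], 1: [], 3: [1]}  # table -> tables it depends on

def stages(deps):
    """Group tables into stages: a table is placed once all the tables it
    depends on are placed, so each stage holds mutually independent tables."""
    placed, result = {}, []
    remaining = set(deps)
    while remaining:
        stage = {t for t in remaining if all(d in placed for d in deps[t])}
        for t in stage:
            placed[t] = len(result)
        result.append(sorted(stage))
        remaining -= stage
    return result

print(stages(deps))  # tables 1 and 2 first, then 3, 4, 5 in parallel
```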
The paper points out some of the really tough challenges that come with a standard abstraction layer. The abstraction models each of the many protocols the switch supports, with all the fields you would like to control, and as such becomes unwieldy very quickly. A second and important limitation mentioned is the fact that switches are different. Different vendors implement functionality differently and provide different capabilities, and there is no easy mechanism to express the differences in these key functions. And these are often the types of functions that make switch buyers select one vendor over another. Creating a standardized superset of all abstraction layers of all hardware and software switches is very hard. From Plexxi's perspective, one of the reasons we do not use OpenFlow is a need to abstract application workflows and topologies rather than individual or aggregate flows.
Source / Further reading:
http://www.plexxi.com/2014/02/openflow-evolution-standardized-packet-processing-abstraction-hard/#sthash.0Q9wq0cl.9CCGLjIo.dpbs
Parser:
Programmable parser: translate parser description into a state machine
Fixed parser: verify that the parser description is consistent with the target processor
Control program
Table graph: dependencies on processing order, opportunities for parallelism
Mapping to switch resources: physical tables, types of memory, sizes, combining tables
Rule translation
Verify rules agree with the types
Translate rules into the physical tables
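The "verify rules agree with the types" step above can be sketched as a small type check. Everything here is invented for illustration — the table names, field names, and bit widths are not from any real spec — but it shows the idea: a rule is validated against its logical table's declared match fields before being translated into a physical-table entry.

```python
# Hypothetical type declarations for two logical tables: which fields
# each table may match on, and the field widths in bits (all made up).
TABLE_TYPES = {
    "ipv4_lpm": {"ip_dst": 32},
    "mac":      {"eth_dst": 48},
}

def check_rule(table, rule):
    """Verify a rule's match fields agree with the table's declared types."""
    types = TABLE_TYPES[table]
    for field, value in rule.items():
        if field not in types:
            raise TypeError(f"{field} is not matchable in table {table}")
        if value >= 2 ** types[field]:
            raise ValueError(f"{field} value too wide for {types[field]} bits")
    return True  # rule is well-typed; translation to physical tables may proceed

print(check_rule("ipv4_lpm", {"ip_dst": 0x0A000001}))  # 10.0.0.1 as an int
```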
This works towards the goal of greater flexibility and portability, though the authors stop short of designing a language and compiler.
Many variables affect SDN architecture
Apps, Controllers and Switches
Topology and Traffic
Mapping multi-table OF is rather tricky, uncertain at run time
Meanwhile, production operators NEED determinism, confidence
Typically they get it via testing of apps, controllers and switches in a few topologies and a variety of traffic loads
With so much work done over and over prior to production run time… can’t we “remember” what the app needed from the switches, and how pipelines were mapped?
Why redo it at run-time?
Instead of x86, we propose C or Java as the parallel
Both can be compiled for optimization
C is cool because it can be very low level
Java is cool because it supports multiple models
“byte code” model for run-time portability, also compilable
New framework: share switch pipeline “specs” before run-time
Comparable to picking the multiply instruction
Choose operands at run-time… that’s enough
To make it work, we must define pipelines in advance
The pipeline is a “datapath model”
Multiple unambiguous NDMs
App / controller and switch must agree on NDM
Process for “agreement” defined by FAWG and CMWG
NDMs based on, evolve with, OpenFlow architecture
1st gen NDMs are OFS1.x-based: “Table Type Patterns” (TTPs)
TTPs definable by ONF or ONF members
Using FAWG’s common language for TTPs
Anyone can define, so self-test scheme needed
Models have test info section for basic validation
3rd party testing can go further
Plus, a new framework!
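The "app/controller and switch must agree on an NDM" step can be sketched as a simple negotiation. The TTP structure below is a toy stand-in, not the real FAWG schema: the names, versions, and table lists are invented. What it shows is the framework's core move — agreement on a pipeline model happens before run-time, so nothing has to be re-derived per flow.

```python
# Toy Table Type Pattern (TTP) the controller was built against.
controller_ttp = {"name": "L2-L3-ACL", "version": "1.0",
                  "tables": ["mac", "ipv4_lpm", "acl"]}

def negotiate(switch_ttps, wanted):
    """Pick the switch-advertised TTP that matches what the controller
    needs; refuse to proceed if there is no common pattern."""
    for ttp in switch_ttps:
        if (ttp["name"], ttp["version"]) == (wanted["name"], wanted["version"]):
            return ttp
    raise RuntimeError("no common Table Type Pattern: cannot guarantee mapping")

# A switch advertising the patterns it supports (again, invented names).
switch_ttps = [{"name": "L2-L3-ACL", "version": "1.0",
                "tables": ["mac", "ipv4_lpm", "acl"]}]

print(negotiate(switch_ttps, controller_ttp)["tables"])
```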
To evaluate any SDN proposition one should at least ask these three questions.
Answering these questions allows one to compare solutions from different vendors, and to think about how the SDN offering may be integrated into the existing infrastructure and evolve over time.
To evaluate the technical merits / risks of an SDN solution, and/or whenever you work on developing / integrating one, these critical questions should be carefully considered.
We have clarified SDN and its relation to OpenFlow.
A common denominator of any SDN definition is that a new way to communicate with networks shall be pursued, moving from device-centric low-level operations to automated higher-level network-wide control solutions.
The discussion on avoiding vendor lock-in is a sensitive one. Even with split control architectures, where forwarding devices may arguably be interchanged, vendor lock-in is actually based on specific features, regardless of whether they live in SW or HW (or a combination of both via extensibility of the southbound APIs). In any case, the openness of the APIs matters to reduce the potential lock-in.
The cost discussion is another sensitive matter that should not be overlooked. It is not clear that SDN will magically reduce your total costs, since costs (and profit margins) may simply be shifted from one component (e.g., HW) to others.
There is, however, a light of hope in reducing OPEX by means of automation, integrating network-related processes, and bringing agility to your core business.
SDN may be a real step forward in networking, beyond a rebrand of product lines, if the attractiveness of SDN is actually delivered.
Whether it is a real step can be evaluated by considering if the SDN proposition you are looking into allows you to gain from these different ways of doing networking.
Source: http://www.olddog.co.uk/IIR-SDN-Farrel.ppt
The figure shows the basic I2RS architecture between applications using I2RS, their associated I2RS Clients, and I2RS Agents. Applications access I2RS services through I2RS clients. A single client can provide access to one or more applications. In the figure, Clients A and B each provide access to a single application, while Client P provides access to multiple applications.
I2RS agents and clients communicate with one another using an asynchronous protocol. Therefore, a single client can post multiple simultaneous requests, either to a single agent or to multiple agents. Furthermore, an agent can process multiple requests, either from a single client or from multiple clients, simultaneously.
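The asynchronous interaction pattern described above — one client posting several simultaneous requests to one or more agents — can be sketched with asyncio. This is not the I2RS protocol itself: the agent names, request strings, and `agent_handle` coroutine are all made up. It only illustrates the concurrency model.

```python
# Minimal asyncio sketch of the asynchronous client/agent pattern:
# requests are issued concurrently and replies collected as they complete.
import asyncio

async def agent_handle(agent, request):
    """Stand-in for an I2RS agent processing one request."""
    await asyncio.sleep(0.01)  # simulate processing time
    return f"{agent}: done {request}"

async def client():
    # A single client posts multiple simultaneous requests: two to one
    # agent, one to another.
    tasks = [agent_handle("agent-A", "read RIB"),
             agent_handle("agent-A", "subscribe next-hop events"),
             agent_handle("agent-B", "write route 10.0.0.0/24")]
    return await asyncio.gather(*tasks)  # gather preserves request order

replies = asyncio.run(client())
print(replies)
```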
The I2RS agent provides read and write access to selected data on the routing element, organized into I2RS Services. Section 4 describes how access is mediated by authentication and access control mechanisms. In addition to read and write access, the I2RS agent allows clients to subscribe to different types of notifications about events affecting different object instances.
An example not related to the creation, modification or deletion of an object instance is when a next-hop in the RIB is resolved enough to be used or when a particular route is selected by the RIB Manager for installation into the forwarding plane.
Source & further reading: http://datatracker.ietf.org/doc/draft-ietf-i2rs-architecture/
Source: http://www.olddog.co.uk/IIR-SDN-Farrel.ppt
Within the IETF, Application-Based Network Operations (ABNO) is defined to provide a solution based on standard protocols and components. The main component of the ABNO architecture is the Path Computation Element (PCE).
The IETF ABNO architecture is based on existing standard blocks defined within the IETF (PCE, ALTO, VNTM...), which could be implemented either in a centralized or distributed way according to network operator requirements.