Managing a network requires support for multiple concurrent tasks, from routing and traffic monitoring, to access control and server load balancing. Software-Defined Networking (SDN) allows applications to realize these tasks directly, by installing packet-processing rules on switches. However, today's SDN platforms provide limited support for creating modular applications.
Join Bay Area Network Virtualization as Dr. Joshua Reich, Postdoctoral Research Scientist and Computing Innovation Fellow at Princeton University presents Pyretic - a new programmer-friendly domain-specific language embedded in Python that enables modular programming for SDN applications. Pyretic is part of the Frenetic Network Programming Language initiative sponsored by Princeton University and Cornell University, with support from the National Science Foundation, the Office of Naval Research, Google, Intel and Dell.
8. OpenFlow Programming Stack
[Diagram: an app (control flow, data structures, etc.) runs on a runtime inside a monolithic controller platform, which programs the switches through the switch API (OpenFlow). *First-order approximation]
9. Programming SDN w/ OpenFlow
Images by Billy Perkins
• The Good
– Network-wide visibility
– Direct control over the switches
– Simple data-plane abstraction
• The Bad
– Low-level programming interface
– Functionality tied to hardware
– Explicit resource control
• The Ugly
– Non-modular, non-compositional
– Challenging distributed programming
10. Controller Platform
[Diagram: the same stack (app, runtime, monolithic controller, switch API (OpenFlow), switches); today's platforms (OpenDaylight, POX, etc.) expose data and control flow at roughly the level of registers, adders, and assembly language]
11. Pyretic: Language & Platform
[Diagram: apps (LB, Route, Monitor, FW) use the Pyretic programmer API on the Pyretic runtime, which programs the switches through the switch API (OpenFlow)]
13. Pyretic Philosophy
• Provide high-level programming abstractions
• Based on state-of-the-art PL techniques
• Implement w/ state-of-the-art systems tech
• Package for the networking community
• Focus on real-world utility and cutting-edge features
14. Open Source and Programmer-Friendly
• Embedded and Implemented in Python
– Rich support for dynamic policies
– Familiar constructs and environment
– Python library / module system
• Default 3-clause BSD license
• Active & growing developer community
– GitHub repository & wiki
– Video tutorials and documentation
15. Where Pyretic Fits
• Hard and soft-switches
– Network processors
– Reconfigurable pipelines
– Hardware accelerated features
– Programmable x86 dataplane
• APIs and Languages
• Traffic Monitoring/Sampling/QoS
• Testing, Debugging, Verification
• Experimentation & Testbeds
• Integration & Orchestration
– End hosts
– Legacy switches
– Middleboxes
– Network Services
• WAN, Enterprise, DC, Home
• Wireless & Optical
• Systems Challenges
– Scalability
– Fault-tolerance
– Consistency
– Upgradability
– Performance
• Security
• Virtualization
– of network
– of services
– of address space
21. Policy in OpenFlow
• Defining “policy” is complicated
– All rules in all switches
– Packet-in handlers
– Polling of counters
• Programming “policy” is error-prone
– Duplication between rules and handlers
– Frequent changes in policy (e.g., flowmods)
– Policy changes affect packets in flight
22. From Rules to a Policy Function
• Located packet
– A packet and its location (switch and port)
• Policy function
– From located packet to set of located packets
• Examples
– Original packet: identity
– Drop the packet: drop
– Modified header: modify(f=v)
– New location: fwd(a)
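The policy-function model above can be sketched in plain Python. This is an illustrative model, not the actual Pyretic API: packets are encoded as hashable frozensets of header fields (including their location) so that policies can return sets of located packets.

```python
# Illustrative model (not the real Pyretic internals): a policy is a
# function from a located packet to a set of located packets.

def pkt(**fields):
    """Build a hashable 'located packet' from header fields,
    including its location (switch, inport)."""
    return frozenset(fields.items())

def identity(p):
    return {p}                      # pass the packet through unchanged

def drop(p):
    return set()                    # the empty set: the packet disappears

def modify(**kw):
    def policy(p):
        d = dict(p)
        d.update(kw)                # rewrite the given header fields
        return {pkt(**d)}
    return policy

def fwd(port):
    return modify(outport=port)     # a new location is just a rewrite

p = pkt(switch=1, inport=2, dstip="10.0.0.1")
assert identity(p) == {p}
assert drop(p) == set()
assert fwd(3)(p) == {pkt(switch=1, inport=2, dstip="10.0.0.1", outport=3)}
```

Returning a set (rather than a single packet) is what makes multicast behavior such as flood expressible in the same model.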
24. Why a set? Consider flood
To flood every packet:

from pyretic.lib.corelib import *

def main():
    return flood()
25. No worrying about:
• Writing a separate handler for the controller
• Constructing and sending OpenFlow rules
• Keeping track of each switch
• Whether all switches support the OpenFlow "flood" action (portability!)
27. From Bit Patterns to Predicates
• OpenFlow: bit patterns
– No direct way to specify dstip!=10.0.0.1
– Requires two prioritized bitmatches
• Higher priority: dstip=10.0.0.1
• Lower priority: *
• Pyretic: boolean predicates
– Combined with &, |, and ~
– E.g., ~match(dstip=10.0.0.1)
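The predicate algebra above is easy to model with operator overloading. A minimal sketch, not Pyretic's implementation: predicates are boolean functions over packets, and `&`, `|`, and `~` build compound predicates, mirroring Pyretic's `match`/`~match` syntax.

```python
# Sketch of boolean predicates combinable with &, | and ~ via
# Python operator overloading (illustrative, not the Pyretic source).

class Pred:
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, pkt):
        return self.fn(pkt)
    def __and__(self, other):          # conjunction
        return Pred(lambda p: self(p) and other(p))
    def __or__(self, other):           # disjunction
        return Pred(lambda p: self(p) or other(p))
    def __invert__(self):              # negation, e.g. ~match(...)
        return Pred(lambda p: not self(p))

def match(**fields):
    """True iff every given header field has the given value."""
    return Pred(lambda p: all(p.get(k) == v for k, v in fields.items()))

not_server = ~match(dstip="10.0.0.1")
assert not_server({"dstip": "10.0.0.2"})
assert not not_server({"dstip": "10.0.0.1"})
```

The compiler, not the programmer, is then responsible for lowering a negation like `~match(dstip=10.0.0.1)` to the two prioritized bit-match rules OpenFlow requires.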
30. OpenFlow Packets
• Location specific code needs both
– The packet
– The OF packet_in message*
• Tagging packets requires choosing
– Header field (e.g., VLAN, MPLS, TOS bits)
– Set of values
Neither concise nor portable
* https://github.com/noxrepo/pox/tree/betta/pox/misc/of_tutorial.py
32. Runtime Handles Implementation
• Compiler chooses
– What set of header bits to use
– What each value means
• Conceptual – not tied to any particular mechanism
• Portable – different hardware, other modules
• Efficient
– Don't need the same VLAN value network-wide
– Sometimes tags will 'factor out'
38. Policy Functions Enable Compositional Operators
• Parallel '|': do both C1 and C2 simultaneously
• Sequential '>>': first do C1, then do C2
39. Example: Sequential Composition
To first apply access control and then flood
access_control() >> flood()
• Two totally independent policies
• Combined w/o any change to either
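In the function-based model, both composition operators are simple set-level combinators. A sketch (illustrative, not the real Pyretic internals), with hypothetical `access_control` and `flood` policies standing in for the modules named above:

```python
# Policies are modeled as functions from a packet to a set of packets.

def seq(c1, c2):
    """Sequential '>>': feed every output of c1 into c2."""
    return lambda p: {q for m in c1(p) for q in c2(m)}

def par(c1, c2):
    """Parallel composition: run both on the original packet, union results."""
    return lambda p: c1(p) | c2(p)

# Hypothetical toy policies for demonstration (packets are strings here).
def access_control(p):
    return set() if p == "blocked" else {p}

def flood(p):
    return {p + "->port1", p + "->port2"}

policy = seq(access_control, flood)   # access_control() >> flood()
assert policy("blocked") == set()     # dropped packets never reach flood
assert policy("ok") == {"ok->port1", "ok->port2"}
```

Because `access_control` maps a blocked packet to the empty set, sequencing automatically gives it veto power over everything downstream, with no change to either module.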
40. E.g., the 'if_' policy

def if_(F, P1, P2):
    return (F >> P1) + (~F >> P2)
Easily Define New Policies
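In the simplified function model, `if_` collapses to a one-line guard. This is an illustrative model, not Pyretic's implementation; the predicate and policies below are hypothetical, with policies returning sets of outputs as in the located-packet model.

```python
# Model of if_: packets satisfying F go through P1, all others through P2.

def if_(F, P1, P2):
    return lambda p: P1(p) if F(p) else P2(p)

# Hypothetical predicate and policies for demonstration.
is_web = lambda p: p.get('dstport') == 80
to_proxy = lambda p: {'proxy'}        # redirect web traffic
dropped = lambda p: set()             # drop everything else

policy = if_(is_web, to_proxy, dropped)
assert policy({'dstport': 80}) == {'proxy'}
assert policy({'dstport': 22}) == set()
```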
41. Or flood

def xfwd(a):
    return ~match(inport=a) >> fwd(a)

flood = parallel([match(switch=sw) >>
                  parallel(map(xfwd, ports))
                  for sw, ports in MST])

• Portable: compiler chooses whether to implement using OpenFlow 'flood' or 'forward'
43. Querying In OpenFlow
• Pre-install rules at the needed granularity
(so byte/packet counters are available)
• Issue queries to poll these counters
• Parse the responses when they arrive
• Combine counter values from multiple rules
• Keep track of any packets sent to controller
44. Queries in Pyretic
• Just a special type of policy
• Forwarding to a “bucket”
– q = packets(limit=1,group_by=['srcip'])
• Used like any other (>>, +)
• w/ Callback function registration
– q.register_callback(printer)
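The bucket idea can be sketched in a few lines of plain Python. This is a hedged model of the concept, not the actual Pyretic library: a query is just a policy that absorbs packets and fires registered callbacks, here grouping on chosen header fields and firing at most `limit` times per group.

```python
# Simplified model of a packets() query bucket (illustrative only).

class PacketsBucket:
    def __init__(self, limit=1, group_by=()):
        self.limit, self.group_by = limit, tuple(group_by)
        self.seen = {}                  # group key -> packets delivered
        self.callbacks = []

    def register_callback(self, fn):
        self.callbacks.append(fn)

    def __call__(self, pkt):
        """Apply the bucket as a policy: absorb the packet, maybe
        fire callbacks, and forward nothing onward."""
        key = tuple(pkt.get(f) for f in self.group_by)
        if self.seen.get(key, 0) < self.limit:
            self.seen[key] = self.seen.get(key, 0) + 1
            for fn in self.callbacks:
                fn(pkt)
        return set()

seen = []
q = PacketsBucket(limit=1, group_by=['srcip'])
q.register_callback(seen.append)
q({'srcip': '10.0.0.1'})
q({'srcip': '10.0.0.1'})                # same group: no new callback
q({'srcip': '10.0.0.2'})
assert [p['srcip'] for p in seen] == ['10.0.0.1', '10.0.0.2']
```

Because the bucket behaves like any other policy, it composes with `>>` and `+` exactly as forwarding policies do.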
46. Query Policy Library

Syntax                        Side Effect
packets(limit=n,              callback on every packet received, for up to
  group_by=[f1,f2,…])         n packets identical on fields f1,f2,...
count_packets(interval=t,     count every packet received; callback every
  group_by=[f1,f2,…])         t seconds providing the count for each group
count_bytes(interval=t,       count bytes received; callback every
  group_by=[f1,f2,…])         t seconds providing the count for each group
51. Functions Support Dynamism
• Dynamic policies are also functions of time
• Value at current moment in self.policy
• Reassigning updates dynamic policy
• Can be driven by queries
52. Example: MAC-Learning

class learn(DynamicPolicy):
    # New type of dynamic policy and query
    def __init__(self):
        # First packet with a unique (srcmac, switch)
        q = packets(limit=1, group_by=['srcmac', 'switch'])
        q.register_callback(self.update)
        # Initialize the current value of the time series to flood
        self.policy = flood() + q

    # Update the current value; otherwise, the policy is unchanged
    def update(self, pkt):
        # If newly learned MAC, forward directly to the learned port
        self.policy = if_(match(dstmac=pkt['srcmac'],
                                switch=pkt['switch']),
                          fwd(pkt['inport']),
                          self.policy)
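The same learning logic can be exercised as a self-contained model in plain Python (no Pyretic dependency); the class and method names below are hypothetical, chosen just to illustrate flood-until-learned behavior.

```python
# Plain-Python model of MAC learning: flood until a
# (switch, srcmac) -> port mapping is learned, then unicast.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}             # (switch, mac) -> learned port

    def handle(self, switch, inport, srcmac, dstmac):
        self.table[(switch, srcmac)] = inport          # learn the sender
        out = self.table.get((switch, dstmac))
        if out is not None:
            return {out}                               # unicast to learned port
        return set(self.ports) - {inport}              # otherwise flood

sw = LearningSwitch(ports=[1, 2, 3])
assert sw.handle(1, 1, srcmac='A', dstmac='B') == {2, 3}  # B unknown: flood
assert sw.handle(1, 2, srcmac='B', dstmac='A') == {1}     # A learned: unicast
```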
56. OpenFlow Topology-Dependence
• An application written for a single switch cannot easily be ported to run over a distributed collection of switches
• Or be made to share switch hardware with other packet-processing applications
57. Pyretic Topology Abstraction
• Built as an application-level library
• On top of other features introduced
• Examples provided for
– Merging
– Splitting
– Refining
58. Topology Abstraction: Merge
• Simplest of the topology abstraction examples
• Build a distributed middlebox by running a centralized middlebox app on V!
[Diagram: physical switches S and T are merged into one abstract switch V; V's ports 1 and 2 correspond to S's port 1 and T's port 2]
61. Platform Architecture
[Diagram: Process 1 hosts the Pyretic runtime, running the main app plus apps 1 and 2, and the Pyretic backend; Process 2 hosts the OF controller platform with its OF controller client. The two processes exchange serialized messages over a TCP socket, and the controller platform speaks OpenFlow messages to the switches]
62. Modes of Operation
• Interpreted (on controller)
– Fallback when rules haven't been installed
– Good for debugging
• Reactive
– Rules installed in response to packets seen
– Fallback when proactive isn't feasible
– Various flavors and optimizations
• Proactive
– Analyze policy and push rules
– Computation can be an issue
– Currently working out kinks
63. Runtime Architecture (in Progress)
[Diagram: components include classifier generation, incrementalization, consistent update, flow-table cache management, and query management, alongside the interpreter, QoS, ACL, and topology handling, plus HA & scale-out]
64. Also in Progress
• New features
– Policy access controls
– QoS operators
– Enhanced querying library
• Applications
– Incorporation of RADIUS & DHCP services
– Wide-area traffic-management solutions for ISPs at SDN-enabled Internet Exchange Points
65. Don't Just Take Our Word
• 2013 NSDI Community Award Winner
• Featured in Coursera SDN MOOC
(~50K students, over 1K did assignments)
• Georgia Tech Resonance Reimplementation
– Approximately one programmer-day
– Six-fold reduction in code size (previously implemented in NOX)
– Short expressions replaced complex code
• Matching packets
• Modifying and injecting packets
• Constructing and installing OpenFlow rules
– Added new features too complex for NOX
Today, I'm going to show you how to build sophisticated SDN apps out of independent modules, written in a novel domain-specific language embedded in Python, available for you to download at the URL behind me.
The API that has made this possible is OpenFlow, which I'll describe in the context of a simple network (which we will return to later) made of an SDN switch connected on port 1 to the internet, and on ports 2 and 3 to servers A and B respectively.
Let's consider a simple OpenFlow IP routing program, shown in the green box. Each line or "rule" has a pattern that determines which packets will be matched (here, destination IPs), an associated action (here, forward out a particular port), and a priority used to disambiguate between overlapping patterns.
A different choice of pattern, using MAC addresses instead of IPs with the same relative priorities (we'll assume our rules are listed in priority order from here on), produces an Ethernet switching program.
Likewise, we can produce a load balancer by choosing a more complex pattern match and a different action: modify. In this way, OpenFlow lets us manage all sorts of heterogeneous hardware and functionality using the same interface!
hub.py
This isn't a trivial challenge. Consider the balance and route snippets we wrote earlier. How can we combine them in sequence, first balancing then routing? If we place our balancing rules at higher priority than our routing rules, we balance without routing. If we place our routing rules at higher priority than our balancing rules, we route without balancing. In fact, there's no way to produce the desired policy without writing a new set of rules. No platform I'm aware of offers a general way to code even this simple instance of modularity. But our Pyretic system does this, and far more.
From ICFP’11 and POPL’12
As I've mentioned, Pyretic provides two composition operators: parallel composition, in which multiple policies run simultaneously, and sequential composition, in which one policy runs on the output of the previous. Encoding policies as functions makes it easy to specify precisely the behavior of each operator, which lets us write the routing component of our load balancer the same as the one we saw in the intro. But by using parallel composition and logical predicates, we no longer need to worry about expressing behavior indirectly through priority orderings!
access_control.py
Which lets us finish our first example, the monitoring load balancer. We've already written the route and monitor modules. The balance module is similar to the OpenFlow version we saw previously, but with a third case for packets not destined to P; to these we apply the identity policy, so such packets are passed through to the next module unchanged. With these three modules, completing the monitoring load balancer takes one quick line: balance, then forward, and do monitoring in parallel. This is significantly more elegant, modifiable, and reusable than writing the corresponding app on any other controller platform.
I'll demonstrate this in the context of a MAC-learning module. For those of you who aren't familiar, MAC-learning logic works by initially flooding all received packets; as mappings between ports and MAC addresses are learned by each switch, that switch moves from flooding to unicasting. We start by declaring a new standard Python class which will encapsulate our time-series object. We then initialize the object by creating a bucket which, for each switch, will call back when a new srcmac is seen. We register an update function with that bucket, and initialize the current value of the time series to flood all packets. The update function simply takes the previous value of the time series and produces a new one that replaces a flood with a unicast. All in all, pretty compact, yet very flexible.
Now I can show you how Pyretic supports topology abstraction, in the context of a simple "one-big-switch" transform in which we create one abstract switch out of our underlying network. The big switch's ports (here V1 and V2) are the boundary ports of the underlying network (here ports S1 and T2 respectively). This is a useful abstraction, as it enables the programmer to build a distributed software middlebox using a centralized programming model (i.e., run a single-switch SDN middlebox on the one-big-switch), and it all just works!