OpenFlow as a Service
Fred Hsu, M. Salman Malik, Soudeh Ghorbani
{fredhsu2,mmalik10,ghorban2}@illinois.edu
Abstract—By providing a logically centralized controller that runs the management applications that directly control the packet-handling functionality in the underlying switches, the newly introduced paradigm of Software Defined Networking (SDN) paves the way for simpler network management. There has been extensive excitement in the networking community about SDN and OpenFlow, leading to various proposals for OpenFlow controllers such as NOX and FloodLight [1], [2]; and recent advances in cloud computing have resulted in the development of reliable open source tools for managing clouds, such as OpenStack (an extensively used open source Infrastructure as a Service (IaaS) cloud computing project [3]). Yet these two parts have not been integrated. In this work, we bridge this gap by providing a scalable OpenFlow controller as a plugin for OpenStack's “network connectivity as a service” project (Quantum [4]) that avoids a considerable shortcoming of its currently available OpenFlow controller: scalability.
I. INTRODUCTION
Cloud computing is rapidly increasing in popularity [5]. The elasticity and dynamic service provisioning offered by the cloud have attracted a lot of attention. The pay-as-you-go model has effectively turned cloud computing into a utility and made it accessible even to startups with limited budgets. Given the large monetary benefits that cloud computing has to offer, more and more corporations are migrating to the cloud.
It is not only industry: the cloud has also received great attention from researchers, as it poses many interesting challenges [6]. Innovation in the cloud has become easier with the advent of the OpenStack project [3]. OpenStack is an open source project that enables anyone to run and manage a production or experimental cloud infrastructure. It is a powerful architecture that can be used to provide Infrastructure as a Service (IaaS) to users.
Traditionally, OpenStack has comprised an instance management project (Nova), an object storage project (Swift), and an image repository project (Glance). Networking previously did not receive much attention from the OpenStack community, and network management was delegated to Nova services, which can provide flat network configuration or VLAN-segmented networks [7]. This basic networking capability makes it difficult for tenants to set up multi-tier networks (in flat networking mode) on the one hand, and suffers from scalability issues (in VLAN mode) on the other [8].
Fortunately, the OpenStack community has been cognizant of these limitations and has taken the initiative to enhance the networking capabilities of OpenStack. The new OpenStack project, Quantum [4], is designed to provide “network connectivity as a service” between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova [9]) [4] (see Section II for details). Essentially, Quantum enables tenants to create virtual networks with great ease. Its modular architecture and standardized API can be leveraged to provide plugins for firewalls, ACLs, etc. [4]. Even in the short span of time since Quantum's inception, multiple plugins have been developed to work with the Quantum service. The one particularly relevant to this work is a plugin for an OpenFlow controller called Ryu [10].
Although the Ryu project is an attempt to integrate the advantages of OpenFlow with OpenStack, it lacks a very fundamental requirement of cloud computing infrastructure: scalability (more details in Section III). In this work we address this shortcoming by providing a more scalable OpenFlow plugin for the Quantum project. Moreover, as a proof of concept of the management applications that benefit from being run by a logically centralized controller, we demonstrate the promising performance of a virtual machine migration application that could run on top of our controller.
The rest of the paper is organized as follows. In Section II, we provide a brief overview of the OpenStack project and its different parts. In Section III, we present our approach for addressing the shortcomings of the already available OpenFlow controller plugin and the current status of the project. Section IV analyzes the scalability of our approach. In Section V we explain the VM migration application and present its results. Finally, we review related work in Section VI and conclude in Section VII.
II. BACKGROUND
Since we extensively use OpenStack, Nova, Quantum, and Open vSwitch, we provide a brief overview of them in this section.
A. OpenStack
OpenStack is an open source cloud management system (CMS) [8]. It comprises five core projects, namely Nova, Swift, Glance, Keystone, and Horizon. Quantum is another project that will be added to the core in upcoming releases of OpenStack. Before the introduction of Quantum, networking functionality was the responsibility of Nova (which was mainly designed to provide instances on demand).
B. Nova
As pointed out earlier, the primary responsibility of Nova is to provide a tenant-facing API that a tenant can use to request new instances in the infrastructure. These requests are channeled by nova-api over the Advanced Message Queuing Protocol (AMQP) to the scheduler, which in turn assigns the task of instantiating the VM to one of the compute workers. Nova is also responsible for setting up the network configuration of these instances. The three modes of networking provided by Nova are flat networking, flat DHCP, and VLAN networking. The cloud operator selects one of them by choosing the appropriate networking manager [7].
C. Quantum
Quantum is a “virtual network service” that
aims to provide a powerful API to define the
network connectivity between interface devices
implemented by other OpenStack services (e.g.,
vNICs from Nova virtual servers) [4].
It provides many advantages for cloud tenants, giving them an API to build rich networking topologies, such as multi-tier web application topologies, and to configure advanced network capabilities in the cloud, such as end-to-end QoS guarantees. Moreover, it provides a greatly extensible framework for building plugins, which has facilitated the development of some highly utilized plugins like the Open vSwitch and Nicira Network Virtualization Platform (NVP) plugins [11], [12]. Quantum originally focuses on providing L2 connectivity between interfaces. However, its extensibility provides an avenue for innovation by allowing new functionality to be provided via plugins. We leverage this opportunity to develop a “controller” as one such plugin. Generally, the role of a Quantum plugin is to translate logical network modifications received from the Quantum Service API and map them to specific operations on the switching infrastructure, as sketched below. Plugins are able to expose advanced capabilities beyond L2 connectivity using API extensions.
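As a rough illustration of this role, the following skeleton shows the shape such a plugin takes. The method names are modeled on the early Quantum plugin interface, but exact names and signatures vary by release, so treat them as illustrative rather than authoritative.

```python
# Illustrative skeleton of a Quantum plugin. Method names are modeled on
# the early Quantum plugin interface; exact signatures differ by release.


class ExampleQuantumPlugin(object):
    """Translates logical network operations into switch configuration."""

    def create_network(self, tenant_id, net_name):
        # Allocate an isolation identifier (e.g., a VLAN ID in our design)
        # for the new logical network and record the mapping.
        raise NotImplementedError

    def delete_network(self, tenant_id, net_id):
        # Release the isolation identifier and remove switch state.
        raise NotImplementedError

    def create_port(self, tenant_id, net_id, port_state):
        # Create a logical attachment point on the network.
        raise NotImplementedError

    def plug_interface(self, tenant_id, net_id, port_id, interface_id):
        # Bind a vNIC (e.g., from a Nova instance) to the logical port.
        raise NotImplementedError
```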
D. Open vSwitch
Open vSwitch (OVS) is a software switch that resides in the hypervisor and can provide connectivity between the guests residing in that hypervisor. It is also capable of speaking to an OpenFlow controller, which can be located locally or remotely on another host. OVS is handy because it allows the network state associated with a VM to be transferred along with the VM on migration, reducing the configuration burden on operators.
E. Floodlight
Floodlight [2] is a Java-based OpenFlow controller forked from Beacon [13], a pioneering OpenFlow controller developed at Stanford (the other pioneering controller being NOX). Our choice of Floodlight is motivated by the controller's simplicity and high performance, but we believe that other controllers [14] can serve as reasonably good alternatives.
F. Ryu
The closest work to ours is Ryu, an open source network operating system for OpenStack that provides logically centralized control and an API that makes it easy for operators to create new network management and control applications. It uses the OpenFlow protocol to modify the behavior of network devices [10].
Ryu manages L2 network segregation of tenants without using VLANs. It creates individual flows for inter-VM communications, and it has been shown in the literature that such approaches do not scale to data center networks, since they exhaust switch memory quite quickly [15], [16].
III. OUR APPROACH
We provide an OpenFlow plugin for Quantum that leverages the Floodlight controller to provide better scalability. Among the available OpenFlow controllers, we chose FloodLight because it is designed to be an enterprise-grade, high-performance controller [2]. Although we build our plugin on FloodLight as the proof of concept, we believe it should be easy to extend our approach to other standard OpenFlow controllers, in case the providers and tenants of data centers prefer to deploy them. We leave a detailed treatment of the applicability of our approach to other controllers for future work.
Our plugin takes requests from the Quantum API for the creation, updating, and deletion of network resources and implements them on the underlying network. In addition to the plugin, an agent is loaded on each Nova compute node; it handles the creation of virtual interfaces for the VMs and attaches them to the network provided by Quantum. Our solution leverages Open vSwitch as an OpenFlow-based virtual switch to provide the underlying network to Quantum, and configures the vSwitch via the Floodlight OpenFlow controller, as sketched below.
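To make the node-side setup concrete, here is a minimal sketch of what the agent does with Open vSwitch: it creates the integration bridge and delegates the bridge's flow table to Floodlight. The ovs-vsctl subcommands are standard; the bridge name and controller address are illustrative assumptions.

```python
# Minimal sketch of node-side OVS setup. The ovs-vsctl subcommands are
# standard; the bridge name and controller address are illustrative.
import subprocess


def setup_bridge(bridge="br-int", controller="tcp:192.168.1.10:6633"):
    # Create the integration bridge (no-op if it already exists).
    subprocess.check_call(["ovs-vsctl", "--may-exist", "add-br", bridge])
    # Delegate the bridge's forwarding decisions to the OpenFlow controller.
    subprocess.check_call(["ovs-vsctl", "set-controller", bridge, controller])


def plug_vif(vif_name, bridge="br-int"):
    # Attach a VM's virtual interface to the bridge so Quantum can manage it.
    subprocess.check_call(["ovs-vsctl", "--may-exist", "add-port", bridge, vif_name])
```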
A. Challenges
The main challenge in providing OpenFlow controllers for Quantum is scalability. The existing Ryu plugin takes the approach of creating flows for all inter-VM traffic. This will not scale, as the number of flows exceeds the Ternary Content Addressable Memory (TCAM) capacity of the OpenFlow switches.
A detailed discussion of such scalability issues of OpenFlow is provided in [15], where the authors show that the number of flow entries required in data centers and high-performance networks (where an average ToR switch might have roughly 78,000 flow rules if the rule timeout is 60 seconds) exceeds the TCAM memory that commodity switches devote to OpenFlow rules (they report that a typical model supports around 1,500 OpenFlow rules).
As an alternative approach, we implement
tenant network separation with VLANs, which
allows for a more scalable solution. We ac-
knowledge that VLANs also have scaling limi-
tations. Hence, a possible extension of our work
would be to use some form of encapsulation to
scale even further.
B. Architecture
Our Quantum plugin is responsible for taking network creation requests, translating the network ID given by Quantum to a VLAN, and storing these translations in a database. The plugin handles the creation of an Open vSwitch bridge and keeps track of the logical network model. The agent and plugin keep track of the interfaces that are plugged into the virtual network, and contact Floodlight for new traffic that enters the network. Traffic is tagged with a VLAN ID by OpenFlow and Floodlight, based on the network that the port is assigned to and the port's source MAC address. Once tagged, the network traffic is forwarded by a learning switch configured on Floodlight to control the vSwitch. As a result, VM traffic is isolated on a per-tenant basis through VLAN tagging and OpenFlow control, as the sketch below illustrates.
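As an illustration of how such a tagging rule could be installed, the sketch below pushes a static flow entry to Floodlight's REST interface (the static flow pusher). The endpoint and field names follow commonly documented Floodlight conventions of this era but have changed across versions, so treat them as assumptions; the addresses are hypothetical.

```python
# Hedged sketch: install a "tag on ingress" rule through Floodlight's
# static flow pusher REST API. The endpoint and field names follow common
# Floodlight documentation but vary across versions; the controller
# address and rule values are illustrative.
import json
import urllib2  # Python 2, matching the OpenStack tooling of the period

CONTROLLER = "http://192.168.1.10:8080"


def push_vlan_tag_rule(dpid, in_port, src_mac, vlan_id, out_port):
    rule = {
        "switch": dpid,                      # datapath ID of the OVS bridge
        "name": "tag-%s" % src_mac,          # unique rule name
        "ingress-port": str(in_port),
        "src-mac": src_mac,                  # match the port's source MAC
        "active": "true",
        # Tag with the tenant's VLAN, then forward.
        "actions": "set-vlan-id=%d,output=%d" % (vlan_id, out_port),
    }
    req = urllib2.Request(CONTROLLER + "/wm/staticflowentrypusher/json",
                          json.dumps(rule),
                          {"Content-Type": "application/json"})
    return urllib2.urlopen(req).read()
```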
Figure 1 shows the architecture of our plugin.
Fig. 1. Plugin Architecture.
As shown, tenants pass commands to the Quantum manager using nova-client. The Quantum manager relays these calls to the Floodlight plugin, which implements the create/read/update/delete (CRUD) functionalities. The plugin realizes these functions by creating a mapping between each tenant's network ID and a VLAN ID, which is stored in the database. Whenever a new port is attached to the Quantum network, the plugin adds a corresponding port to the OVS bridge and stores the mapping between the port and VLAN ID in the database. Finally, the Quantum agent, which runs as a daemon on each hypervisor, keeps polling the database and the OVS bridge for changes; when a change is observed, it is communicated to the Floodlight client. This client then uses a RESTful API to talk to the Floodlight controller module. This way, the controller knows the mappings between port, network ID, and VLAN ID. Whenever a packet for which the OVS has no entry arrives, the packet is sent to the controller for a decision. The controller then pushes rules into the OVS telling it which VLAN ID to use to tag the packets, as well as to encapsulate the packets with physical host addresses. Furthermore, the controller adds an entry to each physical switch with the action to pass the packet through the normal packet-processing pipeline, so the packet is forwarded based on a simple learning-switch mechanism. Thus, the number of entries in the TCAM of each physical switch is directly proportional to the number of distinct VLANs that pass through that switch.
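Figure 7, at the end of the paper, shows a snippet of our actual agent; the condensed sketch below conveys the polling loop it implements. The db and controller objects stand in for the plugin database handle and the Floodlight REST client, so their method names are assumptions, not the real interface.

```python
# Condensed, hypothetical sketch of the agent daemon's polling loop.
# `db` and `controller` stand in for the plugin database handle and the
# Floodlight REST client; their method names are illustrative.
import time


def agent_loop(db, controller, poll_interval=2.0):
    known_ports = set()
    while True:
        current_ports = set(db.get_ports())    # ports recorded by the plugin
        for port_id in current_ports - known_ports:
            net_id = db.network_for(port_id)
            # Report the new (port, network ID, VLAN ID) mapping upstream.
            controller.report_port(port_id, net_id, db.vlan_for(net_id))
        for port_id in known_ports - current_ports:
            controller.remove_port(port_id)     # port was unplugged
        known_ports = current_ports
        time.sleep(poll_interval)
```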
IV. ANALYSIS
In this section we provide an analysis of our approach compared to Ryu in terms of the number of flow entries that we can expect to see at the switches. Like Tavakoli et al. [17], we assume that there are 20 VMs per server, each with 10 (5 incoming and 5 outgoing) concurrent flows. In such a setup, a VM-to-VM flow-based approach like Ryu will not scale. Figure 2 shows a comparison between Ryu and our approach.
Fig. 2. Comparison of expected flow table entries.
Here we calculate the number of flow table entries based on the fields of the flow-matching rules that are specified (unspecified fields are wildcarded) when pushing such rules into the OpenFlow switches. In the case of Ryu, the match rules are based on the source and destination MAC addresses of the VMs (with the rest of the fields wildcarded), and hence a top-of-rack (ToR) switch would have to keep 20 servers/rack x 20 VMs/server x 10 concurrent flows/VM = 4000 entries in its TCAM. In our approach, by contrast, we aggregate flow entries based on the VLAN tag of the packet, i.e., we have one matching rule per VM at the physical switches (the worst-case scenario, in which each VM on a server belongs to a different tenant). Thus, the number of flow table entries that we need to store in the ToR's TCAM is 10 times smaller than with Ryu.
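The arithmetic behind Figure 2 is simple enough to restate as a back-of-the-envelope computation; the numbers are exactly the workload assumptions stated above.

```python
# Back-of-the-envelope computation behind Figure 2, using the workload
# assumptions stated above (after Tavakoli et al. [17]).
SERVERS_PER_RACK = 20
VMS_PER_SERVER = 20
FLOWS_PER_VM = 10  # 5 incoming + 5 outgoing concurrent flows

# Ryu-style per-flow rules: one ToR TCAM entry per concurrent VM-VM flow.
ryu_entries = SERVERS_PER_RACK * VMS_PER_SERVER * FLOWS_PER_VM

# VLAN aggregation, worst case: one rule per VM (every VM on a server
# belongs to a different tenant).
vlan_entries = SERVERS_PER_RACK * VMS_PER_SERVER

print(ryu_entries, vlan_entries)    # 4000 400
print(ryu_entries // vlan_entries)  # 10x reduction
```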
V. AN EXAMPLE OF MANAGEMENT APPLICATIONS: VM MIGRATION
OpenFlow and our plugin (as a realization of it for OpenStack) can simplify management operations by providing a global view of the network and direct control over forwarding behavior. In this section, we provide an example of such an application: a VM migration application, which is in charge of migrating tenants' virtual machines. We explain why such an operation is useful and what the challenges are, and we show how our plugin can help VM migration by presenting some results.
Advances in technologies for high-speed and seamless migration of VMs turn VM migration into a promising and efficient means for load balancing, configuration, power saving, attaining better resource utilization by reallocating VMs, cost management, etc. in data centers [18]. Despite these numerous benefits, VM migration is still a challenging task for providers, since moving VMs requires updates of network state, which could lead to inconsistencies, outages, the creation of loops, and violations of service level agreement (SLA) requirements [19]. Many applications today, such as financial services, social networking, recommendation systems, and web search, cannot tolerate such problems or degradation of service [15], [20].
On the positive side, SDN provides a powerful tool for tackling these challenging problems: the ability to run algorithms in a logically centralized location and to precisely manipulate the forwarding layer of switches creates a new opportunity for transitioning the network between two states.
In particular, in this section we seek to answer the following question: given a starting network and a goal network, each consisting of a set of switches with a set of forwarding rules, can we come up with a sequence of OpenFlow instructions that transforms the starting network into the goal network while preserving desired correctness conditions such as freedom from loops and bandwidth guarantees? This problem boils down to solving two sub-problems: determining the ordering of VM migrations (the sequence planning); and, for each VM to be migrated, determining the ordering of OpenFlow instructions to be installed or removed.
To perform the transition while preserving correctness guarantees, we test the performance of the optimal algorithm, i.e., the algorithm that, among all possible orderings for performing the migrations, determines the ordering that results in the minimum number of violations. In particular, given the network topology, the SLA requirements, and the set of VMs to be migrated along with their new locations, this algorithm outputs an ordered sequence of VMs to migrate and a set of forwarding state changes.¹ This algorithm runs in the SDN controller to orchestrate these changes within the network.
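A naive rendering of this optimal planner is sketched below: it exhaustively tries every migration ordering and keeps the one with the fewest bandwidth violations. The helper count_violations, which would simulate applying the moves on the substrate topology, is an assumption rather than shown code; the factorial search cost is why footnote 1 leaves faster variants to future work.

```python
# Naive sketch of the optimal migration planner described above: try every
# ordering and keep the one with the fewest SLA (bandwidth) violations.
# `count_violations(order, topology)` is an assumed helper that simulates
# applying the migrations in the given order on the substrate topology.
from itertools import permutations


def plan_migrations(vms_to_move, topology, count_violations):
    best_order, best_cost = None, float("inf")
    for order in permutations(vms_to_move):  # factorial in len(vms_to_move)
        cost = count_violations(order, topology)
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost
```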
To evaluate our design, we simulated its performance using realistic data center and virtual network topologies (explained later). We find that, for a wide spectrum of workloads, this algorithm significantly improves on randomly ordering the migrations (by up to 80%) in terms of the number of VMs that it can migrate without violating SLAs.²
Allocating virtual networks on a shared physical data center has been studied extensively [21]–[23]. For both the underlying physical network and the VNs, we borrow the topologies and settings used in these works. More specifically, for the underlying topology we test the algorithms on random graph, tree, fat-tree, DCell, and BCube topologies. For the VNs, we use star, tree, and 3-tier graphs, which are common for web service applications [21]–[23]. Furthermore, for initially allocating VNs before migration, we use SecondNet's algorithm [22] because of its low time complexity, high utilization, and support for arbitrary topologies.

¹We leave improvements to this algorithm, such as optimizing it to run faster, to future work.
²As a proof of concept for preserving SLA requirements while migrating, in this work we focus on avoiding bandwidth violations.
We select random virtual nodes to migrate and pick their destinations uniformly at random from the set of substrate nodes with free capacity. We acknowledge that the diverse scenarios in which migrations are performed might require different mechanisms for node or destination selection, and that such selections might impact the performance of the algorithms. We leave exploration of such mechanisms, and of our heuristic's performance under them, to future work.
Our experiments are performed on an Intel Core i7-2600K machine with 16GB of memory. Figure 3 shows the results of 10 rounds of experiments over a 200-node tree where each substrate node has capacity 2, substrate links have 500MB of bandwidth, and VNs are 9-node trees whose links have a 10MB bandwidth requirement. As the figure shows, the fraction of violations remains under 30% when applying the optimal algorithm, while it can get close to 100% under some random orderings.³

Fig. 3. Fraction of migrations that would lead to violation of link capacities with different algorithms.

³The rising trend in the fraction of violations under random planning is due to the fact that, as the number of migrations increases, more and more VMs are displaced from the original locations specified by the allocation algorithm. This sub-optimal placement of VN nodes makes a feasible random migration less likely; e.g., it is more likely to encounter a violation while migrating the 10th VM than the 1st. It is interesting to note that even with quite a large number of migrations, the fraction of violations encountered by the optimal solution remains almost constant.

VI. RELATED WORK

In the following, we review approaches taken to scale multi-tenant cloud networks. These are relevant since any of them could be incorporated as an underlying communication mechanism for a Quantum plugin; thus an understanding of their pros and cons is useful for plugin design.
Traditionally, VLANs (IEEE 802.1Q) have been used as a mechanism for providing isolation between the different tiers of a multi-tier application, and amongst the different tenants and organizations that coexist in the cloud. Although VLANs divide an L2 network into isolated broadcast domains, they still do not enable agility of services: the number of hosts that a given VLAN can incorporate is limited to a few thousand. Thus, as [24] reports, any service that needs to expand may have to be accommodated in a different VLAN from the one hosting the rest of its servers, leading to fragmentation of the service. Furthermore, VLAN configuration is highly error-prone and difficult to manage if done manually. Although it is possible to configure access ports and trunk ports automatically with the help of the VLAN Management Policy Server (VMPS) and the VLAN Trunking Protocol (VTP), respectively, the latter is undesirable because network admins must divide switches into VTP domains, and each switch in a given domain has to participate in all the VLANs within that domain, which leads to unnecessary overhead (see [25] for further discussion). Additionally, since the VLAN header provides only a 12-bit VLAN ID, we can have at most 4096 VLANs in the network. This is relatively low considering the multipurpose use of VLANs, and virtualization in data centers has further exacerbated the situation, as more segments need to be created.
Virtual eXtensible LAN (VXLAN) [26] is a recent technology being standardized in the IETF. VXLAN aims to eliminate the limitations of VLANs by introducing a 24-bit VXLAN network identifier (VNI), which means that with VXLAN it is possible to have 16M virtual segments in the network. VXLAN makes use of Virtual Tunnel Endpoints (VTEPs) that sit in the software switch of the hypervisor (or can be placed at access switches) and encapsulate packets with the VM's associated VNI (see Figure 4). VTEPs use the Internet Group Management Protocol (IGMP) to join multicast groups (Figure 5). This helps eliminate unknown-unicast flooding, which is now only sent to the VTEPs in the multicast group of the sender's VNI.

Fig. 4. Overview of VXLAN [27].
Fig. 5. VXLAN unknown unicast handling [27].

Limitations: Since there can be 16M segments, which exceeds the maximum number of multicast groups, many segments belonging to different VNIs may have to share the same multicast group [27]. This is problematic both in terms of security and performance.
TRansparent Interconnection of Lots of Links (TRILL) [28] is another interesting concept being standardized in the IETF. It runs the IS-IS routing protocol between bridges (called RBridges) so that each RBridge is aware of the network topology. Furthermore, a nickname acquisition protocol is run among the RBridges so that each RBridge can identify the others. When an ingress bridge receives a packet, it encapsulates the packet in a TRILL header that consists of the ingress switch's nickname, the egress switch's nickname, and additional source and destination MAC addresses. These MAC addresses are swapped on each hop (just as routers do). RBridges at the edge learn the source MAC addresses of hosts for each incoming packet they encapsulate as an ingress switch, and the MAC addresses of hosts for every packet they decapsulate as an egress switch. If a destination MAC address is unknown to the ingress switch, the packet is sent to all the switches for discovery. Furthermore, TRILL uses a hop count field in the header that is decremented at each hop by the RBridges, which prevents forwarding loops.

Limitations: Although TRILL and RBridges overcome STP's limitations, they are not designed for scalability [29], [30]. Furthermore, since the TRILL header contains a hop count field that must be decremented at each hop, the Frame Check Sequence (FCS) also needs to be recalculated at each hop, which may in turn affect the forwarding performance of switches [31].
VII. CONCLUSION
We discussed the benefits of using OpenFlow for cloud computing, described an open source project for cloud management, and explained why its available OpenFlow plugin does not provide an acceptable level of scalability. We laid out our design for an alternative plugin for Quantum that sidesteps the scalability issue of the current OpenFlow plugin. We also gave one instance of a data center management application (VM migration) that could benefit from our OpenFlow plugin and tested its performance. Finally, we provided hints about possible future work in this direction.
REFERENCES
[1] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, "NOX: towards an operating system for networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105–110, 2008.
[2] "FloodLight OpenFlow Controller," http://floodlight.openflowhub.org/.
[3] "Open source software for building private and public clouds," http://openstack.org/.
[4] "OpenStack - Quantum wiki," http://wiki.openstack.org/Quantum.
[5] J. Cappos, I. Beschastnikh, A. Krishnamurthy, and T. Anderson, "Seattle: a platform for educational cloud computing," in ACM SIGCSE Bulletin, vol. 41, no. 1. ACM, 2009, pp. 111–115.
[6] Y. Vigfusson and G. Chockler, "Clouds at the crossroads: research perspectives," Crossroads, vol. 16, no. 3, pp. 10–13, 2010.
[7] "OpenStack compute administration manual - Cactus," http://docs.openstack.org/cactus/openstack-compute/admin/content/networking-options.html.
[8] "OpenStack, Quantum and Open vSwitch, part I," http://openvswitch.org/openstack/2011/07/25/openstack-quantum-and-open-vswitch-part-1/.
[9] "Nova's documentation," http://nova.openstack.org/.
[10] "Ryu network operating system as OpenFlow controller," http://www.osrg.net/ryu/using_with_openstack.html.
[11] "Open vSwitch: Production Quality, Multilayer Open Virtual Switch," http://openvswitch.org/.
[12] "Nicira Networks," http://nicira.com/.
[13] "Beacon Home," https://openflow.stanford.edu/display/Beacon/Home.
[14] "List of OpenFlow Software Projects," http://yuba.stanford.edu/~casado/of-sw.html.
[15] A. Curtis, J. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and S. Banerjee, "DevoFlow: Scaling flow management for high-performance networks," in ACM SIGCOMM, 2011.
[16] A. Curtis, W. Kim, and P. Yalagandula, "Mahout: Low-overhead datacenter traffic management using end-host-based elephant detection," in INFOCOM, 2011 Proceedings IEEE. IEEE, 2011, pp. 1629–1637.
[17] A. Tavakoli, M. Casado, T. Koponen, and S. Shenker, "Applying NOX to the datacenter," in Proc. HotNets, October 2009.
[18] V. Shrivastava, P. Zerfos, K.-W. Lee, H. Jamjoom, Y.-H. Liu, and S. Banerjee, "Application-aware virtual machine migration in data centers," in INFOCOM, 2011.
[19] M. Reitblatt, N. Foster, J. Rexford, and D. Walker, "Consistent updates for software-defined networks: Change you can believe in!" in HotNets, 2011.
[20] M. Lee, S. Goldberg, R. R. Kompella, and G. Varghese, "Fine-grained latency and loss measurements in the presence of reordering," in SIGMETRICS, 2011.
[21] Y. Zhu and M. H. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in INFOCOM, 2006.
[22] C. Guo, G. Lu, H. J. Wang, S. Yang, C. Kong, P. Sun, W. Wu, and Y. Zhang, "SecondNet: A data center network virtualization architecture with bandwidth guarantees," ser. CoNEXT, 2010.
[23] H. Ballani, P. Costa, T. Karagiannis, and A. I. T. Rowstron, "Towards predictable datacenter networks," in SIGCOMM, 2011.
[24] A. Greenberg, J. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. Maltz, P. Patel, and S. Sengupta, "VL2: a scalable and flexible data center network," ACM SIGCOMM Computer Communication Review, vol. 39, no. 4, pp. 51–62, 2009.
[25] M. Yu, J. Rexford, X. Sun, S. Rao, and N. Feamster, "A survey of virtual LAN usage in campus networks," Communications Magazine, IEEE, vol. 49, no. 7, pp. 98–103, 2011.
[26] "VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks," http://tools.ietf.org/html/draft-mahalingam-dutt-dcops-vxlan-00/.
[27] "Digging deeper into VXLAN," http://blogs.cisco.com/datacenter/digging-deeper-into-vxlan/.
[28] "Routing Bridges (RBridges): Base Protocol Specification," http://tools.ietf.org/html/rfc6325.
[29] C. Tu, "Cloud-scale data center network architecture," 2011.
[30] "Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement," http://tools.ietf.org/html/rfc5556.
[31] R. Niranjan Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat, "PortLand: a scalable fault-tolerant layer 2 data center network fabric," in ACM SIGCOMM Computer Communication Review, vol. 39, no. 4. ACM, 2009, pp. 39–50.

Fig. 6. Snippet of code from OVS base class that we leveraged.
Fig. 7. Snippet of the agent daemon that polls database and updates controller.

Contenu connexe

Tendances

Understanding network and service virtualization
Understanding network and service virtualizationUnderstanding network and service virtualization
Understanding network and service virtualizationSDN Hub
 
#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015
#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015
#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015Dmitri Kalintsev
 
Generalized Virtual Networking, an enabler for Service Centric Networking and...
Generalized Virtual Networking, an enabler for Service Centric Networking and...Generalized Virtual Networking, an enabler for Service Centric Networking and...
Generalized Virtual Networking, an enabler for Service Centric Networking and...Stefano Salsano
 
Network and Service Virtualization tutorial at ONUG Spring 2015
Network and Service Virtualization tutorial at ONUG Spring 2015Network and Service Virtualization tutorial at ONUG Spring 2015
Network and Service Virtualization tutorial at ONUG Spring 2015SDN Hub
 
VMware NSX 101: What, Why & How
VMware NSX 101: What, Why & HowVMware NSX 101: What, Why & How
VMware NSX 101: What, Why & HowAniekan Akpaffiong
 
Floodlight OpenFlow Controller Overview
Floodlight OpenFlow Controller OverviewFloodlight OpenFlow Controller Overview
Floodlight OpenFlow Controller Overviewmscohen02
 
Optimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using dockerOptimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using dockerRahul Krishna Upadhyaya
 
Network Virtualization: Delivering on the Promises of SDN
Network Virtualization: Delivering on the Promises of SDNNetwork Virtualization: Delivering on the Promises of SDN
Network Virtualization: Delivering on the Promises of SDNOpen Networking Summits
 
SDN 101: Software Defined Networking Course - Sameh Zaghloul/IBM - 2014
SDN 101: Software Defined Networking Course - Sameh Zaghloul/IBM - 2014SDN 101: Software Defined Networking Course - Sameh Zaghloul/IBM - 2014
SDN 101: Software Defined Networking Course - Sameh Zaghloul/IBM - 2014SAMeh Zaghloul
 
MidoNet gives OpenStack Neutron a Boost
MidoNet gives OpenStack Neutron a BoostMidoNet gives OpenStack Neutron a Boost
MidoNet gives OpenStack Neutron a BoostOpenStack_Online
 
Unified Underlay and Overlay SDNs for OpenStack Clouds
Unified Underlay and Overlay SDNs for OpenStack CloudsUnified Underlay and Overlay SDNs for OpenStack Clouds
Unified Underlay and Overlay SDNs for OpenStack CloudsPLUMgrid
 
OPNFV Service Function Chaining
OPNFV Service Function ChainingOPNFV Service Function Chaining
OPNFV Service Function ChainingOPNFV
 
VMworld 2013: Operational Best Practices for NSX in VMware Environments
VMworld 2013: Operational Best Practices for NSX in VMware Environments VMworld 2013: Operational Best Practices for NSX in VMware Environments
VMworld 2013: Operational Best Practices for NSX in VMware Environments VMworld
 
SDN Fundamentals - short presentation
SDN Fundamentals -  short presentationSDN Fundamentals -  short presentation
SDN Fundamentals - short presentationAzhar Khuwaja
 
StratusLab: Darn Simple Cloud
StratusLab: Darn Simple CloudStratusLab: Darn Simple Cloud
StratusLab: Darn Simple Cloudstratuslab
 
Virt july-2013-meetup
Virt july-2013-meetupVirt july-2013-meetup
Virt july-2013-meetupnvirters
 
VMworld 2013: An Introduction to Network Virtualization
VMworld 2013: An Introduction to Network Virtualization VMworld 2013: An Introduction to Network Virtualization
VMworld 2013: An Introduction to Network Virtualization VMworld
 
Introduction to Beryllium release of OpenDaylight
Introduction to Beryllium release of OpenDaylightIntroduction to Beryllium release of OpenDaylight
Introduction to Beryllium release of OpenDaylightSDN Hub
 
Midokura OpenStack Day Korea Talk: MidoNet Open Source Network Virtualization...
Midokura OpenStack Day Korea Talk: MidoNet Open Source Network Virtualization...Midokura OpenStack Day Korea Talk: MidoNet Open Source Network Virtualization...
Midokura OpenStack Day Korea Talk: MidoNet Open Source Network Virtualization...Dan Mihai Dumitriu
 

Tendances (20)

Understanding network and service virtualization
Understanding network and service virtualizationUnderstanding network and service virtualization
Understanding network and service virtualization
 
#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015
#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015
#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015
 
Generalized Virtual Networking, an enabler for Service Centric Networking and...
Generalized Virtual Networking, an enabler for Service Centric Networking and...Generalized Virtual Networking, an enabler for Service Centric Networking and...
Generalized Virtual Networking, an enabler for Service Centric Networking and...
 
Network and Service Virtualization tutorial at ONUG Spring 2015
Network and Service Virtualization tutorial at ONUG Spring 2015Network and Service Virtualization tutorial at ONUG Spring 2015
Network and Service Virtualization tutorial at ONUG Spring 2015
 
VMware NSX 101: What, Why & How
VMware NSX 101: What, Why & HowVMware NSX 101: What, Why & How
VMware NSX 101: What, Why & How
 
Floodlight OpenFlow Controller Overview
Floodlight OpenFlow Controller OverviewFloodlight OpenFlow Controller Overview
Floodlight OpenFlow Controller Overview
 
Optimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using dockerOptimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using docker
 
Network Virtualization: Delivering on the Promises of SDN
Network Virtualization: Delivering on the Promises of SDNNetwork Virtualization: Delivering on the Promises of SDN
Network Virtualization: Delivering on the Promises of SDN
 
SDN 101: Software Defined Networking Course - Sameh Zaghloul/IBM - 2014
SDN 101: Software Defined Networking Course - Sameh Zaghloul/IBM - 2014SDN 101: Software Defined Networking Course - Sameh Zaghloul/IBM - 2014
SDN 101: Software Defined Networking Course - Sameh Zaghloul/IBM - 2014
 
MidoNet gives OpenStack Neutron a Boost
MidoNet gives OpenStack Neutron a BoostMidoNet gives OpenStack Neutron a Boost
MidoNet gives OpenStack Neutron a Boost
 
Unified Underlay and Overlay SDNs for OpenStack Clouds
Unified Underlay and Overlay SDNs for OpenStack CloudsUnified Underlay and Overlay SDNs for OpenStack Clouds
Unified Underlay and Overlay SDNs for OpenStack Clouds
 
OPNFV Service Function Chaining
OPNFV Service Function ChainingOPNFV Service Function Chaining
OPNFV Service Function Chaining
 
Sdn primer pdf
Sdn primer pdfSdn primer pdf
Sdn primer pdf
 
VMworld 2013: Operational Best Practices for NSX in VMware Environments
VMworld 2013: Operational Best Practices for NSX in VMware Environments VMworld 2013: Operational Best Practices for NSX in VMware Environments
VMworld 2013: Operational Best Practices for NSX in VMware Environments
 
SDN Fundamentals - short presentation
SDN Fundamentals -  short presentationSDN Fundamentals -  short presentation
SDN Fundamentals - short presentation
 
StratusLab: Darn Simple Cloud
StratusLab: Darn Simple CloudStratusLab: Darn Simple Cloud
StratusLab: Darn Simple Cloud
 
Virt july-2013-meetup
Virt july-2013-meetupVirt july-2013-meetup
Virt july-2013-meetup
 
VMworld 2013: An Introduction to Network Virtualization
VMworld 2013: An Introduction to Network Virtualization VMworld 2013: An Introduction to Network Virtualization
VMworld 2013: An Introduction to Network Virtualization
 
Introduction to Beryllium release of OpenDaylight
Introduction to Beryllium release of OpenDaylightIntroduction to Beryllium release of OpenDaylight
Introduction to Beryllium release of OpenDaylight
 
Midokura OpenStack Day Korea Talk: MidoNet Open Source Network Virtualization...
Midokura OpenStack Day Korea Talk: MidoNet Open Source Network Virtualization...Midokura OpenStack Day Korea Talk: MidoNet Open Source Network Virtualization...
Midokura OpenStack Day Korea Talk: MidoNet Open Source Network Virtualization...
 

Similaire à OpenFlow as a Service from research institute

SDN: A New Approach to Networking Technology
SDN: A New Approach to Networking TechnologySDN: A New Approach to Networking Technology
SDN: A New Approach to Networking TechnologyIRJET Journal
 
ONP 2.1 platforms maximize VNF interoperability
ONP 2.1 platforms maximize VNF interoperabilityONP 2.1 platforms maximize VNF interoperability
ONP 2.1 platforms maximize VNF interoperabilityPaul Stevens
 
Optimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using dockerOptimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using dockerAnanth Padmanabhan
 
Optimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using dockerOptimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using dockerSatya Sanjibani Routray
 
Openstack Global Meetup
Openstack Global Meetup Openstack Global Meetup
Openstack Global Meetup openstackindia
 
Quantum essex summary
Quantum essex summaryQuantum essex summary
Quantum essex summaryDan Wendlandt
 
Conference Paper: Towards High Performance Packet Processing for 5G
Conference Paper: Towards High Performance Packet Processing for 5GConference Paper: Towards High Performance Packet Processing for 5G
Conference Paper: Towards High Performance Packet Processing for 5GEricsson
 
Research Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and ScienceResearch Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and Scienceinventy
 
OpenStack - An Overview
OpenStack - An OverviewOpenStack - An Overview
OpenStack - An Overviewgraziol
 
MidoNet Overview - OpenStack and SDN integration
MidoNet Overview - OpenStack and SDN integrationMidoNet Overview - OpenStack and SDN integration
MidoNet Overview - OpenStack and SDN integrationAkhilesh Dhawan
 
Virtual Networks - A Perspective from a Cloud Connect 2010 Panel
Virtual Networks - A Perspective from a Cloud Connect 2010 PanelVirtual Networks - A Perspective from a Cloud Connect 2010 Panel
Virtual Networks - A Perspective from a Cloud Connect 2010 PanelRobert Grossman
 
Mastering OpenStack - Episode 05 - Controller Nodes
Mastering OpenStack - Episode 05 - Controller NodesMastering OpenStack - Episode 05 - Controller Nodes
Mastering OpenStack - Episode 05 - Controller NodesRoozbeh Shafiee
 
Openstack starter-guide-diablo
Openstack starter-guide-diabloOpenstack starter-guide-diablo
Openstack starter-guide-diablobabycat_feifei
 
Openstack starter-guide-diablo
Openstack starter-guide-diabloOpenstack starter-guide-diablo
Openstack starter-guide-diablo锐 张
 
OpenStack Networking and Automation
OpenStack Networking and AutomationOpenStack Networking and Automation
OpenStack Networking and AutomationAdam Johnson
 
Active networking on a programmable networking platform
Active networking on a programmable networking platformActive networking on a programmable networking platform
Active networking on a programmable networking platformTal Lavian Ph.D.
 

Similaire à OpenFlow as a Service from research institute (20)

SDN: A New Approach to Networking Technology
SDN: A New Approach to Networking TechnologySDN: A New Approach to Networking Technology
SDN: A New Approach to Networking Technology
 
Sdn 02
Sdn 02Sdn 02
Sdn 02
 
OVS-LinuxCon 2013.pdf
OVS-LinuxCon 2013.pdfOVS-LinuxCon 2013.pdf
OVS-LinuxCon 2013.pdf
 
ONP 2.1 platforms maximize VNF interoperability
ONP 2.1 platforms maximize VNF interoperabilityONP 2.1 platforms maximize VNF interoperability
ONP 2.1 platforms maximize VNF interoperability
 
Optimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using dockerOptimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using docker
 
Optimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using dockerOptimising nfv service chains on open stack using docker
Optimising nfv service chains on open stack using docker
 
Openstack Global Meetup
Openstack Global Meetup Openstack Global Meetup
Openstack Global Meetup
 
Quantum essex summary
Quantum essex summaryQuantum essex summary
Quantum essex summary
 
DesignofSDNmanageableswitch.pdf
DesignofSDNmanageableswitch.pdfDesignofSDNmanageableswitch.pdf
DesignofSDNmanageableswitch.pdf
 
Conference Paper: Towards High Performance Packet Processing for 5G
Conference Paper: Towards High Performance Packet Processing for 5GConference Paper: Towards High Performance Packet Processing for 5G
Conference Paper: Towards High Performance Packet Processing for 5G
 
Research Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and ScienceResearch Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and Science
 
OpenStack - An Overview
OpenStack - An OverviewOpenStack - An Overview
OpenStack - An Overview
 
BuildingSDNmanageableswitch.pdf
BuildingSDNmanageableswitch.pdfBuildingSDNmanageableswitch.pdf
BuildingSDNmanageableswitch.pdf
 
MidoNet Overview - OpenStack and SDN integration
MidoNet Overview - OpenStack and SDN integrationMidoNet Overview - OpenStack and SDN integration
MidoNet Overview - OpenStack and SDN integration
 
Virtual Networks - A Perspective from a Cloud Connect 2010 Panel
Virtual Networks - A Perspective from a Cloud Connect 2010 PanelVirtual Networks - A Perspective from a Cloud Connect 2010 Panel
Virtual Networks - A Perspective from a Cloud Connect 2010 Panel
 
Mastering OpenStack - Episode 05 - Controller Nodes
Mastering OpenStack - Episode 05 - Controller NodesMastering OpenStack - Episode 05 - Controller Nodes
Mastering OpenStack - Episode 05 - Controller Nodes
 
Openstack starter-guide-diablo
Openstack starter-guide-diabloOpenstack starter-guide-diablo
Openstack starter-guide-diablo
 
Openstack starter-guide-diablo
Openstack starter-guide-diabloOpenstack starter-guide-diablo
Openstack starter-guide-diablo
 
OpenStack Networking and Automation
OpenStack Networking and AutomationOpenStack Networking and Automation
OpenStack Networking and Automation
 
Active networking on a programmable networking platform
Active networking on a programmable networking platformActive networking on a programmable networking platform
Active networking on a programmable networking platform
 

Dernier

Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slidevu2urc
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Servicegiselly40
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessPixlogix Infotech
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...Neo4j
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUK Journal
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 

Dernier (20)

Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your Business
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 

OpenFlow as a Service from research institute

  • 1. OpenFlow as a Service Fred Hsu, M. Salman Malik, Soudeh Ghorbani {fredhsu2,mmalik10,ghorban2}@illinois.edu Abstract—By providing a logically centralized controller that runs the management applica- tions that directly control the packet-handling functionality in the underlying switches, newly introduced paradigm of Software Defined Net- working (SDN) paves the way for network man- agement. Although there has been an extensive excitement in the networking community about SDN and OpenFlow which has led to various proposals for building OpenFlow controllers such as NOX and FloodLight [1], [2]; and despite recent advances in cloud computing that have resulted in development of reliable open source tools for managing clouds like OpenStack (an extensively used open source Infrastructure as a Service (IaaS) cloud computing project [3]), these two parts have not been integrated yet. In this work, we plan to bridge this gap, by providing a scalable OpenFlow controller as a plugin for OpenStack’s “network connectivity as a service” project (Quantum [4]) that avoids a considerable shortcomings of its currently available OpenFlow controller: Scalability. I. INTRODUCTION Cloud computing is rapidly increasing in popularity [5]. Elasticity and dynamic service provisioning offered by cloud has attracted a lot of attention. The pay-as-you-go model has effectively turned cloud computing into a utility, and has made it accessible even to startups with limited budget. Given the large monetary bene- fits that cloud computing has to offer, more and more corporates are now migrating to cloud. It is not only industry. Cloud has received great attention from researchers as it poses many in- teresting challenges [6]. Innovation in the cloud has become easier with the advent of Open- Stack project [3]. OpenStack is an open source project that enables anyone to run and manage a production or experimental cloud infrastructure. It is a powerful arhcitecuture that can be used to provide Infrastructure as a Service (IaaS) to users. Traditionally, OpenStack has comprised of in- stance management project (Nova), object stor- age project (Swift) and image repository project (Glance). Previously, networking has not re- ceived much attention of OpenStack community and the network management responsibility was delegated to Nova services, which can provide flat network configuration or VLAN segmented networks [7]. This basic networking capability on one hand makes it difficult for tenants to setup multi-tier networks (in flat networking mode) and suffers from scalability issues (in VLAN mode) on the other hand [8]. Fortunately, OpenStack community has been cognizant of these limitations and has taken an initiative to enhance the networking capabilities of OpenStack. The new OpenStack project, Quantum [4], is designed to provide “network connectivity as a service” between interface devices (e.g., vNICs) managed by other Open- stack services (e.g., Nova [9]) [4] (See Sec- tion II for details). Essentially, Quantum would enable the tenants to create virtual networks with great ease. The modular architecture and standardized API can be leveraged to provide plugins for firewalls and ACLs etc. [4]. Even in this short span of time, since Quantum’s inception, multiple plugins have been developed to work with Quantum service. The one partic- ularly relevant to this work is a plugin for an OpenFlow Controller called Ryu [10]. 
tempt to integrate the advantages of OpenFlow with OpenStack, it lacks a very fundamental requirement of cloud computing infrastructure: scalability (more details in Section III). In this work, we address this shortcoming by providing a more scalable OpenFlow plugin for the Quantum project.
Moreover, as a proof of concept of the management applications that largely benefit from being run by a logically centralized controller, we demonstrate the promising performance of a virtual machine migration application that could run on top of our controller.

The rest of the paper is organized as follows. Section II provides a brief overview of the OpenStack project and its different parts. Section III presents our approach for addressing the shortcomings of the already available OpenFlow controller plugin, along with the current status of the project. Section IV analyzes the scalability of our approach. Section V explains the VM migration application and presents its results. Finally, Section VI surveys related work, and Section VII concludes.

II. BACKGROUND

Since we make extensive use of OpenStack, Nova, Quantum, and Open vSwitch, we provide a brief overview of each in this section.

A. OpenStack

OpenStack is an open source cloud management system (CMS) [8]. It comprises five core projects, namely Nova, Swift, Glance, Keystone, and Horizon. Quantum is another project that will be added to the core in upcoming releases of OpenStack. Before the introduction of Quantum, networking functionality was the responsibility of Nova, which was mainly designed to provide instances on demand.

B. Nova

As pointed out earlier, the primary responsibility of Nova is to provide a tenant-facing API that tenants use to request new instances in the infrastructure. These requests are channeled by nova-api through the Advanced Message Queuing Protocol (AMQP) queue to the scheduler, which in turn assigns the task of instantiating the VM to one of the compute workers. Nova was also responsible for setting up the network configuration of these instances. The three networking modes provided by Nova are flat networking, flat DHCP, and VLAN networking; the cloud operator selects one of them by choosing the appropriate networking manager [7].

C. Quantum

Quantum is a "virtual network service" that aims to provide a powerful API to define the network connectivity between interface devices implemented by other OpenStack services (e.g., vNICs from Nova virtual servers) [4]. It offers cloud tenants an API to build rich networking topologies, such as multi-tier web application topologies, and to configure advanced network capabilities in the cloud, such as end-to-end QoS guarantees. Moreover, it provides a highly extensible framework for building plugins, which has facilitated the development of widely used plugins such as the Open vSwitch and Nicira Network Virtualization Platform (NVP) plugins [11], [12]. Quantum originally focuses on providing L2 connectivity between interfaces, but its extensibility opens an avenue for innovation by allowing new functionality to be added via plugins; we leverage this opportunity to develop a "controller" as one such plugin. Generally, the role of a Quantum plugin is to translate logical network modifications received from the Quantum service API and map them to specific operations on the switching infrastructure. Plugins can expose advanced capabilities beyond L2 connectivity using API extensions.
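To make the translation role described above concrete, the sketch below shows the rough shape of such a plugin in Python (the language OpenStack itself is written in). The class name, the VLAN-allocation scheme, and the method signatures are our own illustrative choices and only loosely follow the Quantum plugin contract of that era; this is a sketch of the idea, not the plugin's actual code.

class FloodlightQuantumPlugin(object):
    """Maps logical Quantum operations onto switch configuration."""

    def __init__(self):
        self.net_to_vlan = {}  # network ID -> VLAN ID (a real DB in practice)
        self.next_vlan = 100   # first VLAN ID handed out; arbitrary choice

    def create_network(self, tenant_id, net_name):
        # Allocate a VLAN for the new logical network and remember it.
        net_id = "%s-%s" % (tenant_id, net_name)
        self.net_to_vlan[net_id] = self.next_vlan
        self.next_vlan += 1
        return {"net-id": net_id}

    def delete_network(self, net_id):
        # Release the mapping; flow cleanup would also be triggered here.
        self.net_to_vlan.pop(net_id, None)

    def plug_interface(self, net_id, port_id, interface_id):
        # Record the attachment so the agent and the controller can tag
        # traffic from this interface with the network's VLAN.
        return {"port": port_id, "vlan": self.net_to_vlan[net_id]}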
D. Open vSwitch

Open vSwitch (OVS) is a software switch that resides in the hypervisor and provides connectivity between the guests on that hypervisor. It is also capable of speaking to an OpenFlow controller, located either locally or remotely on another host. OVS is handy because it allows the network state associated with a VM to be transferred along with the VM upon migration, reducing operators' configuration burden.
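As an illustration of this wiring, the snippet below creates an OVS integration bridge and points it at a remote controller using the standard ovs-vsctl utility (invoked from Python to keep all examples in one language). The bridge name and controller address are placeholders.

import subprocess

def attach_bridge_to_controller(bridge="br-int",
                                controller="tcp:192.168.1.10:6633"):
    # Create the integration bridge if it does not already exist.
    subprocess.check_call(["ovs-vsctl", "--may-exist", "add-br", bridge])
    # Hand control of the bridge's flow tables to the OpenFlow controller.
    subprocess.check_call(["ovs-vsctl", "set-controller", bridge, controller])

attach_bridge_to_controller()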
E. Floodlight

Floodlight [2] is a Java-based OpenFlow controller forked from the Beacon controller [13], one of the two pioneering OpenFlow controllers (the other being NOX), developed at Stanford. Our choice of Floodlight is attributed to the simplicity and yet high performance of the controller, but we believe that other controllers [14] can serve as reasonably good alternatives.

F. Ryu

The work closest to ours is Ryu, an open source network operating system for OpenStack that provides logically centralized control and an API that makes it easy for operators to create new network management and control applications. It supports the OpenFlow protocol to modify the behavior of network devices [10]. Ryu manages L2 segregation of tenants without using VLANs: it creates individual flows for inter-VM communications, and it has been shown in the literature that such approaches do not scale to data center networks, since they exhaust switch memory quite fast [15], [16].

III. OUR APPROACH

We provide an OpenFlow plugin for Quantum that leverages the Floodlight controller to achieve better scalability. Among the available OpenFlow controllers, we chose Floodlight, which is designed to be an enterprise-grade, high-performance controller [2]. Although we build our plugin on Floodlight as the proof of concept, we believe it should be easy to extend our approach to other standard OpenFlow controllers, should data center providers and tenants prefer to deploy them; we leave a detailed treatment of this applicability to future work. Our plugin takes requests from the Quantum API for the creation, updating, and deletion of network resources and implements them on the underlying network. In addition to the plugin, an agent runs on each Nova compute node; it handles the creation of virtual interfaces for the VMs and attaches them to the network provided by Quantum. Our solution leverages Open vSwitch as an OpenFlow-based virtual switch to provide the underlying network to Quantum, and configures the vSwitch via the Floodlight OpenFlow controller.

A. Challenges

The main challenge in providing OpenFlow controllers for Quantum is scalability. The existing Ryu plugin takes the approach of creating flows for all inter-VM traffic. This does not scale, as the number of flows exceeds the Ternary Content Addressable Memory (TCAM) capacity of the OpenFlow switches. A detailed discussion of these scalability issues appears in [15], where the authors show that the number of flow entries required in data centers and high-performance networks (where an average ToR switch might hold roughly 78,000 flow rules with a 60-second rule timeout) exceeds the TCAM capacity that commodity switches devote to OpenFlow rules (they report that a typical model supports around 1,500 OpenFlow rules). As an alternative, we implement tenant network separation with VLANs, which allows for a more scalable solution. We acknowledge that VLANs have scaling limitations of their own; a possible extension of our work would be to use some form of encapsulation to scale even further.

B. Architecture

Our Quantum plugin is responsible for taking network creation requests, translating the network ID given by Quantum to a VLAN, and storing these translations in a database. The plugin handles the creation of an Open vSwitch bridge and keeps track of the logical network model. The agent and plugin keep track of the interfaces plugged into the virtual network and contact Floodlight for new traffic entering the network.
Traffic is tagged with a VLAN ID by OpenFlow and Floodlight, based on the network the port is assigned to and the port's source MAC address. Once tagged, the network traffic is forwarded by a learning switch configured on Floodlight to control the vSwitch. As a result, VM traffic is isolated on a per-tenant basis through VLAN tagging and OpenFlow control.
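A sketch of what installing such a tagging rule might look like through Floodlight's REST interface follows. The endpoint and JSON field names follow the early static flow pusher API (later Floodlight releases renamed several fields), and the controller address, DPID, and VLAN values are placeholders, so treat this as illustrative rather than as the exact calls our plugin makes.

import json
import requests  # third-party HTTP library

FLOODLIGHT = "http://127.0.0.1:8080"  # controller REST endpoint (assumed)

def push_vlan_tag_rule(dpid, in_port, src_mac, vlan_id):
    # Match on ingress port and source MAC, tag the frame with the
    # tenant's VLAN, then hand it to the switch's normal pipeline.
    rule = {
        "switch": dpid,               # e.g. "00:00:00:00:00:00:00:01"
        "name": "tag-%s" % src_mac,
        "ingress-port": str(in_port),
        "src-mac": src_mac,
        "active": "true",
        "actions": "set-vlan-id=%d,output=normal" % vlan_id,
    }
    resp = requests.post(FLOODLIGHT + "/wm/staticflowentrypusher/json",
                         data=json.dumps(rule))
    resp.raise_for_status()

# Tag everything 00:11:22:33:44:55 sends on port 3 with VLAN 100:
# push_vlan_tag_rule("00:00:00:00:00:00:00:01", 3, "00:11:22:33:44:55", 100)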
Fig. 1. Plugin Architecture.

Figure 1 shows the architecture of our plugin. As shown, tenants pass commands to the Quantum manager using nova-client. The Quantum manager relays these calls to the Floodlight plugin, which implements the create/read/update/delete (CRUD) functionality. The plugin realizes these functions by creating a mapping between each tenant's network ID and a VLAN ID, which is stored in the database. Whenever a new port is attached to the Quantum network, the plugin adds a corresponding port to the OVS bridge and stores the mapping between the port and the VLAN ID in the database. Finally, the Quantum agent, which runs as a daemon on each hypervisor, polls the database and the OVS bridge for changes; any observed change is communicated to the Floodlight client, which then uses a RESTful API to talk to the Floodlight controller module. In this way, the controller learns the port, network ID, and VLAN ID mappings. Whenever a packet for which OVS has no entry arrives, the packet is sent to the controller for a decision. The controller then pushes rules into OVS telling it which VLAN ID to use to tag the packets, as well as to encapsulate the packets with physical host addresses. Furthermore, the controller adds an entry to each physical switch whose action is to pass the packet through the normal packet-processing pipeline, so the packet is forwarded by a simple learning-switch mechanism. Thus, the number of entries in the TCAM of each physical switch is proportional to the number of distinct VLANs passing through that switch.

IV. ANALYSIS

In this section we analyze our approach, compared to Ryu, in terms of the number of flow entries we can expect at the switches. Like Tavakoli et al. [17], we assume 20 VMs per server, each with 10 concurrent flows (5 incoming and 5 outgoing). In such a setup, a per-flow approach like Ryu's will not scale. Figure 2 compares Ryu's approach with ours. Here we calculate the number of flow-table entries based on the number of fields specified in the flow-matching rules (unspecified fields are wildcarded) when such rules are pushed into the OpenFlow switches. In Ryu's case, the match rules are based on the source and destination MAC addresses of the VMs (with the remaining fields wildcarded), so a top-of-rack (ToR) switch would hold 20 servers/rack × 20 VMs/server × 10 concurrent flows/VM = 4000 entries in its TCAM. In our approach, we aggregate flow entries based on the packet's VLAN tag, i.e., we have one matching rule per VM at the physical switches (the worst case, which assumes every VM on a server belongs to a different tenant). Thus the number of flow-table entries we need to store in a ToR's TCAM is one tenth of what Ryu requires.
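The arithmetic above, restated as a quick sanity check using the per-rack numbers assumed in the text:

SERVERS_PER_RACK = 20
VMS_PER_SERVER = 20
CONCURRENT_FLOWS_PER_VM = 10  # 5 incoming + 5 outgoing

# Ryu-style per-flow rules: one TCAM entry per concurrent VM-VM flow.
ryu_entries = SERVERS_PER_RACK * VMS_PER_SERVER * CONCURRENT_FLOWS_PER_VM

# VLAN aggregation, worst case: one tagging rule per VM (every VM in
# the rack belongs to a different tenant network).
vlan_entries = SERVERS_PER_RACK * VMS_PER_SERVER

print(ryu_entries)                    # 4000
print(vlan_entries)                   # 400
print(ryu_entries // vlan_entries)    # 10x fewer entries with VLANs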
Fig. 2. Comparison of expected flow table entries.

V. AN EXAMPLE OF MANAGEMENT APPLICATIONS: VM MIGRATION

OpenFlow and our plugin (as a realization of it for OpenStack) can simplify management operations by providing a global view of the network and direct control over forwarding behavior. In this section, we present an example of such an application: a VM migration application, in charge of migrating tenants' virtual machines. We explain why this operation is useful and what the challenges are, and we show how our plugin can help VM migration by presenting some results.

Advances in technologies for high-speed, seamless migration of VMs make VM migration a promising and efficient means for load balancing, reconfiguration, power saving, better resource utilization through VM reallocation, cost management, and more in data centers [18]. Despite these numerous benefits, VM migration is still a challenging task for providers, since moving VMs requires updating network state, which can lead to inconsistencies, outages, the creation of loops, and violations of service level agreement (SLA) requirements [19]. Many applications today, such as financial services, social networking, recommendation systems, and web search, cannot tolerate such problems or degradation of service [15], [20].

On the positive side, SDN provides a powerful tool for tackling these challenging problems: the ability to run algorithms in a logically centralized location and precisely manipulate the forwarding layer of switches creates a new opportunity for transitioning the network between two states. In particular, in this section we seek to answer the following question: given a starting network and a goal network, each consisting of a set of switches with forwarding rules, can we produce a sequence of OpenFlow instructions that transforms the starting network into the goal network while preserving desired correctness conditions such as freedom from loops and bandwidth guarantees? This problem boils down to solving two sub-problems: determining the ordering of VM migrations, or the sequence planning; and, for each VM to be migrated, determining the ordering of OpenFlow instructions to install or discard.

To perform the transition while preserving correctness guarantees, we test the performance of the optimal algorithm, i.e., the algorithm that, among all possible orderings of the migrations, finds the one resulting in the minimum number of violations. Given the network topology, the SLA requirements, and the set of VMs to be migrated along with their new locations, this algorithm outputs an ordered sequence of VMs to migrate and a set of forwarding-state changes. (We leave improvements to this algorithm, such as optimizing it to run faster, to future work.) The algorithm runs in the SDN controller to orchestrate these changes within the network. To evaluate our design, we simulated its performance using realistic data center and virtual network topologies (explained below). We find that, across a wide spectrum of workloads, this algorithm significantly improves on randomly ordering the migrations (by up to 80%) in terms of the number of VMs it can migrate without violating SLAs. (As a proof of concept, we focus on avoiding bandwidth violations as the SLA requirement to preserve during migration.)

Allocating virtual networks on a shared physical data center has been studied extensively [21]–[23]. For both the underlying physical network and the VNs, we borrow the topologies and settings used in these works. Specifically, for the underlying topology, we test the algorithms on random graphs, trees, fat-trees, DCell, and BCube. For the VNs, we use star, tree, and 3-tier graphs, which are common for web service applications [21]–[23].
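The paper describes the optimal sequence planner only at a high level; the following brute-force sketch illustrates the sequence-planning sub-problem under a deliberately simplified model. Here residual_bw maps each link to its spare bandwidth and demand(vm, dst) returns the extra load a given migration places on each link; both names, and the model itself, are our assumptions rather than the authors' implementation.

from itertools import permutations

def violations(order, residual_bw, demand):
    """Count migrations in `order` that would oversubscribe some link."""
    bw = dict(residual_bw)
    bad = 0
    for vm, dst in order:
        extra = demand(vm, dst)  # {link: added load for this migration}
        if any(bw[link] < load for link, load in extra.items()):
            bad += 1             # this migration violates a bandwidth SLA
        for link, load in extra.items():
            bw[link] -= load
    return bad

def optimal_order(migrations, residual_bw, demand):
    # Exhaustive search over all n! orderings: feasible only for small
    # batches, but it matches the "among all possible orderings"
    # definition of the optimal algorithm used above.
    return min(permutations(migrations),
               key=lambda order: violations(order, residual_bw, demand))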
Furthermore, for the initial allocation of VNs before migration, we
use SecondNet's algorithm [22], owing to its low time complexity, its high utilization, and its support for arbitrary topologies. We select random virtual nodes to migrate and pick their destinations uniformly at random from the set of substrate nodes with free capacity. We acknowledge that the diverse scenarios in which migrations are performed may call for different node- or destination-selection mechanisms, and that such selections may affect the algorithms' performance; we leave exploring these mechanisms, and our heuristic's performance under them, to future work.

Our experiments were performed on an Intel Core i7-2600K machine with 16 GB of memory. Figure 3 shows the results of 10 rounds of experiments over a 200-node tree, where each substrate node has capacity 2, substrate links have 500 MB of bandwidth, and the VNs are 9-node trees whose links require 10 MB of bandwidth. As the figure shows, the fraction of violations remains under 30% with the optimal algorithm, while it can approach 100% under some random orderings. (The rising trend in the fraction of violations under random planning arises because, as the number of migrations grows, more and more VMs are displaced from the locations chosen by the allocation algorithm; this increasingly sub-optimal placement makes a feasible random migration less likely, e.g., a violation is more likely when migrating the 10th VM than the 1st. Notably, even with quite a large number of migrations, the fraction of violations encountered by the optimal solution remains almost constant.)

Fig. 3. Fraction of migrations that would lead to violations of link capacities under different algorithms.

VI. RELATED WORK

In the following, we review approaches to scaling multi-tenant cloud networks. These are relevant because any of them could serve as the underlying communication mechanism for a Quantum plugin, so an understanding of their pros and cons informs plugin design.

Traditionally, VLANs (IEEE 802.1Q) have been used to provide isolation between the tiers of multi-tier applications and among the different tenants and organizations that coexist in a cloud. Although VLANs mitigate L2 scaling problems by dividing the network into isolated broadcast domains, they still do not enable service agility: the number of hosts a given VLAN can incorporate is limited to a few thousand. Thus, as [24] reports, any service that needs to expand must be accommodated in a VLAN other than the one hosting the rest of its servers, leading to fragmentation of the service. Furthermore, VLAN configuration is highly error-prone and difficult to manage when done manually. Although it is possible to configure access ports and trunk ports automatically with the help of the VLAN Management Policy Server (VMPS) and the VLAN Trunking Protocol (VTP), respectively, the latter is undesirable because network administrators must divide switches into VTP domains, and every switch in a given domain must participate in all the VLANs within that domain, which leads to unnecessary overhead (see [25] for further discussion). Additionally, since the VLAN header provides only a 12-bit VLAN ID, a network can have at most 4096 VLANs, which is low considering the many purposes VLANs serve.
Virtualization in data centers has further exacerbated the situation, as more segments need to be created. Virtual eXtensible LAN (VXLAN) [26] is a recent technology being standardized in the IETF. VXLAN aims to eliminate the limitations of VLAN by introducing a 24-bit VXLAN network identifier (VNI), so with VXLAN it is possible to have 16M virtual segments in the network.
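The gain in identifier space is visible directly in the VXLAN header layout: 8 bytes, with the VNI carried in a 24-bit field (per the VXLAN draft [26]). A minimal sketch:

import struct

VNI_VALID_FLAG = 0x08  # the "I" flag: VNI field carries a valid value

def vxlan_header(vni):
    assert 0 <= vni < 2 ** 24
    # flags(1 byte) + reserved(3) | VNI(3) + reserved(1), network order
    return struct.pack("!II", VNI_VALID_FLAG << 24, vni << 8)

print(len(vxlan_header(5000)))  # 8 bytes of encapsulation header
print(2 ** 24, 2 ** 12)         # 16777216 VXLAN segments vs 4096 VLANs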
Fig. 4. Overview of VXLAN [27].

Fig. 5. VXLAN unknown unicast handling [27].

VXLAN makes use of Virtual Tunnel Endpoints (VTEPs), which lie in the hypervisor's software switch (or can be placed in access switches) and encapsulate packets with the VM's associated VNI (see Figure 4). VTEPs use the Internet Group Management Protocol (IGMP) to join multicast groups (Figure 5). This helps eliminate unknown-unicast flooding, which is now sent only to the VTEPs in the multicast group of the sender's VNI. Limitations: since there can be 16M segments, exceeding the maximum number of multicast groups, many segments belonging to different VNIs may share the same multicast group [27], which is problematic for both security and performance.

TRansparent Interconnection of Lots of Links (TRILL) [28] is another interesting concept being standardized in the IETF. It runs the IS-IS routing protocol between bridges (called RBridges) so that each RBridge is aware of the network topology; furthermore, a nickname-acquisition protocol is run among RBridges so that each RBridge can identify the others. When an ingress bridge receives a packet, it encapsulates the packet in a TRILL header consisting of the ingress switch's nickname, the egress switch's nickname, and additional source and destination MAC addresses; these MAC addresses are swapped at each hop (just as routers do). RBridges at the edge learn hosts' source MAC addresses from each incoming packet they encapsulate as an ingress switch, and hosts' MAC addresses from every packet they decapsulate as an egress switch. If a destination MAC address is unknown to the ingress switch, the packet is sent to all switches for discovery. Furthermore, TRILL uses a hop-count field in the header, decremented at each hop by the RBridges, which prevents forwarding loops. Limitations: although TRILL and RBridges overcome STP's limitations, they are not designed for scalability [29], [30]. Moreover, since the TRILL header contains a hop-count field that must be decremented at each hop, the Frame Check Sequence (FCS) must also be recomputed at each hop, which may in turn affect the forwarding performance of switches [31].

VII. CONCLUSION

We discussed the benefits of using OpenFlow for cloud computing, described an open source project for cloud management, and explained why its available OpenFlow plugin does not provide an acceptable level of scalability. We laid out our design for an alternative Quantum plugin that sidesteps the scalability issue of the current OpenFlow plugin. We also gave one instance of a data center management application (VM migration) that can benefit from our OpenFlow plugin in performing its task, and tested its performance. Finally, we provided hints about possible future work in this direction.

REFERENCES

[1] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, "NOX: Towards an operating system for networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105–110, 2008.
[2] "FloodLight OpenFlow Controller," http://floodlight.openflowhub.org/.
[3] "Open source software for building private and public clouds," http://openstack.org/.
[4] "OpenStack – Quantum wiki," http://wiki.openstack.org/Quantum.
[5] J. Cappos, I. Beschastnikh, A. Krishnamurthy, and T. Anderson, "Seattle: A platform for educational cloud computing," in ACM SIGCSE Bulletin, vol. 41, no. 1. ACM, 2009, pp. 111–115.
[6] Y. Vigfusson and G. Chockler, "Clouds at the crossroads: Research perspectives," Crossroads, vol. 16, no. 3, pp. 10–13, 2010.
[7] "OpenStack compute administration manual – Cactus," http://docs.openstack.org/cactus/openstack-compute/admin/content/networking-options.html.
[8] "OpenStack, Quantum and Open vSwitch, part I," http://openvswitch.org/openstack/2011/07/25/openstack-quantum-and-open-vswitch-part-1/.
[9] "Nova's documentation," http://nova.openstack.org/.
[10] "Ryu network operating system as OpenFlow controller," http://www.osrg.net/ryu/using_with_openstack.html.
[11] "Open vSwitch: Production quality, multilayer open virtual switch," http://openvswitch.org/.
[12] "Nicira Networks," http://nicira.com/.
[13] "Beacon home," https://openflow.stanford.edu/display/Beacon/Home.
[14] "List of OpenFlow software projects," http://yuba.stanford.edu/~casado/of-sw.html.
[15] A. Curtis, J. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and S. Banerjee, "DevoFlow: Scaling flow management for high-performance networks," in ACM SIGCOMM, 2011.
[16] A. Curtis, W. Kim, and P. Yalagandula, "Mahout: Low-overhead datacenter traffic management using end-host-based elephant detection," in INFOCOM, 2011 Proceedings IEEE. IEEE, 2011, pp. 1629–1637.
[17] A. Tavakoli, M. Casado, T. Koponen, and S. Shenker, "Applying NOX to the datacenter," in Proc. HotNets, October 2009.
[18] V. Shrivastava, P. Zerfos, K.-W. Lee, H. Jamjoom, Y.-H. Liu, and S. Banerjee, "Application-aware virtual machine migration in data centers," in INFOCOM, 2011.
[19] M. Reitblatt, N. Foster, J. Rexford, and D. Walker, "Consistent updates for software-defined networks: Change you can believe in!" in HotNets, 2011.
[20] M. Lee, S. Goldberg, R. R. Kompella, and G. Varghese, "Fine-grained latency and loss measurements in the presence of reordering," in SIGMETRICS, 2011.
[21] Y. Zhu and M. H. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in INFOCOM, 2006.
[22] C. Guo, G. Lu, H. J. Wang, S. Yang, C. Kong, P. Sun, W. Wu, and Y. Zhang, "SecondNet: A data center network virtualization architecture with bandwidth guarantees," in CoNEXT, 2010.
[23] H. Ballani, P. Costa, T. Karagiannis, and A. I. T. Rowstron, "Towards predictable datacenter networks," in SIGCOMM, 2011.
[24] A. Greenberg, J. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. Maltz, P. Patel, and S. Sengupta, "VL2: A scalable and flexible data center network," ACM SIGCOMM Computer Communication Review, vol. 39, no. 4, pp. 51–62, 2009.
[25] M. Yu, J. Rexford, X. Sun, S. Rao, and N. Feamster, "A survey of virtual LAN usage in campus networks," IEEE Communications Magazine, vol. 49, no. 7, pp. 98–103, 2011.
[26] "VXLAN: A framework for overlaying virtualized layer 2 networks over layer 3 networks," http://tools.ietf.org/html/draft-mahalingam-dutt-dcops-vxlan-00.
[27] "Digging deeper into VXLAN," http://blogs.cisco.com/datacenter/digging-deeper-into-vxlan/.
[28] "Routing Bridges (RBridges): Base protocol specification," http://tools.ietf.org/html/rfc6325.
[29] C. Tu, "Cloud-scale data center network architecture," 2011.

Fig. 6. Snippet of code from the OVS base class that we leveraged.
[30] "Transparent Interconnection of Lots of Links (TRILL): Problem and applicability statement," http://tools.ietf.org/html/rfc5556.
[31] R. Niranjan Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat, "PortLand: A scalable fault-tolerant layer 2 data center network fabric," in ACM SIGCOMM Computer Communication Review, vol. 39, no. 4. ACM, 2009, pp. 39–50.
Fig. 7. Snippet of the agent daemon that polls the database and updates the controller.
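The code in Fig. 7 survives only as an image in the source, so it cannot be reproduced here. The sketch below shows the general shape such an agent could take, following the description in Section III: poll the plugin's database for new port bindings and report them to the controller over REST. The database schema, table name, and REST path are our assumptions, not the original code.

import sqlite3
import time
import requests  # third-party HTTP library

FLOODLIGHT = "http://127.0.0.1:8080"  # controller REST endpoint (assumed)
POLL_INTERVAL = 2                     # seconds between database polls

def poll_loop(db_path="quantum_plugin.db"):
    seen = set()
    while True:
        conn = sqlite3.connect(db_path)
        # Table name and columns are assumed for illustration.
        rows = conn.execute("SELECT port_id, net_id, vlan_id FROM ports")
        for port_id, net_id, vlan_id in rows:
            if port_id not in seen:
                # Report the new port -> VLAN binding to the controller
                # (the path below is a stand-in, not a real Floodlight API).
                requests.post(FLOODLIGHT + "/networkService/port",
                              json={"port": port_id, "network": net_id,
                                    "vlan": vlan_id})
                seen.add(port_id)
        conn.close()
        time.sleep(POLL_INTERVAL)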