1. 15: Datacenter Design and Networking
Zubair Nabi
zubair.nabi@itu.edu.pk
April 21, 2013
Zubair Nabi 15: Datacenter Design and Networking April 21, 2013 1 / 27
2. Outline
1 Datacenter Topologies
2 Transport Protocols
3 Network Sharing
4 Wrapping Up
5. Introduction
Datacenters are traditionally designed in the form of a 2/3-level tree
Switching elements become more specialized and faster as we go up the tree
A three-level tree has a core switch at the root, aggregation switches in the middle, and edge switches at the leaves
Edge switches have a large number of 1 Gbps ports and a small number of 10 Gbps ports
The 1 Gbps ports connect end-hosts while the 10 Gbps ports connect to aggregation switches
Aggregation and core switches have 10 Gbps ports
Failure of a switch higher up the tree can partition the network
12. Oversubscription
Ideal value of 1:1 – all hosts may potentially communicate with any other host at the full bandwidth of their interface
5:1 – only 20% of the bandwidth is available (200 Mbps for a 1 Gbps interface)
Typical datacenter designs are oversubscribed by a factor of 2.5:1 (400 Mbps) to 8:1 (125 Mbps)
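The arithmetic behind these figures can be sketched as follows, assuming the 1 Gbps host interfaces from the previous slide (the function name is ours, for illustration):

```python
def worst_case_bandwidth_mbps(nic_mbps, ratio):
    """Worst-case per-host bandwidth when the network is
    oversubscribed at `ratio`:1 and all hosts send at once."""
    return nic_mbps / ratio

for ratio in (1, 2.5, 5, 8):
    print(f"{ratio}:1 -> {worst_case_bandwidth_mbps(1000, ratio):.0f} Mbps")
```

This reproduces the numbers on the slide: 1000, 400, 200, and 125 Mbps respectively.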
16. Fat-tree Topology
A k-ary fat-tree has k pods
Each pod contains two layers of k/2 switches
Each k-port switch in the lower (edge) layer is directly connected to k/2 hosts
Each of the remaining k/2 ports connects to k/2 of the k ports of the aggregation switches
There are (k/2)² core switches
Each core switch has one port connected to each of the k pods
The ith port of any core switch is connected to pod i
A k-ary fat-tree supports k³/4 hosts
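The element counts above follow directly from the construction; a minimal sketch (the helper name is ours, not from the Al-Fares et al. paper):

```python
def fat_tree_counts(k):
    """Switch and host counts for a k-ary fat-tree (k even)."""
    assert k % 2 == 0, "k must be even"
    edge = k * (k // 2)          # k pods, each with k/2 edge switches
    aggregation = k * (k // 2)   # k pods, each with k/2 aggregation switches
    core = (k // 2) ** 2         # (k/2)^2 core switches
    hosts = k ** 3 // 4          # k/2 hosts per edge switch
    return {"edge": edge, "aggregation": aggregation, "core": core, "hosts": hosts}

print(fat_tree_counts(4))   # smallest non-trivial fat-tree: 16 hosts
print(fat_tree_counts(48))  # 48-port commodity switches: 27,648 hosts
```

Note how the host count grows cubically in the switch port count, which is what makes commodity switches attractive at scale.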
24. DCell
Uses a recursively defined structure to interconnect servers
Each server connects to different levels of DCells through multiple links
High-level DCells are built recursively from many low-level ones
Fault-tolerant as there is no single point of failure
28. Structure
Uses servers with multiple network ports and mini-switches to construct its recursive structure
DCell0 is the building block used to construct larger DCells: it consists of n servers and a mini-switch
High-level DCells are built recursively from many low-level ones
A DCell1 is constructed from n + 1 DCell0s; the same pattern applies to a DCellk
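The recursion gives DCell its rapid (doubly-exponential) growth in server count; a sketch (function name ours):

```python
def dcell_servers(n, k):
    """Servers in a DCell_k built from n-server DCell_0s.
    t_0 = n, and t_k = t_{k-1} * (t_{k-1} + 1), because a DCell_k
    is assembled from t_{k-1} + 1 copies of DCell_{k-1}."""
    t = n
    for _ in range(k):
        t = t * (t + 1)
    return t

for level in range(4):
    print(f"DCell_{level} with n=4: {dcell_servers(4, level)} servers")
```

With n = 4, just three levels already reach 176,820 servers.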
35. Outline
1 Datacenter Topologies
2 Transport Protocols
3 Network Sharing
4 Wrapping Up
36. TCP and UDP
TCP: connection-oriented, with reliability, ordering, and congestion control
UDP: connectionless, with no ordering, reliability, or congestion control
38. TCP and Datacenter Networks
Communication between different nodes is typically thought of as just opening a TCP connection between them
Common sockets API
But TCP was designed for a wide-area network
Clearly, a datacenter is not a wide-area network
Significantly different bandwidth-delay product, round-trip time (RTT), and retransmission timeout (RTO)
For example, due to the low RTT, the congestion window for each flow is very small
As a result, flow recovery through TCP fast retransmit is often impossible, leading to poor net throughput
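To see how different the regimes are, compare bandwidth-delay products; the link speeds and RTTs below are illustrative assumptions, not figures from the slides:

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes in flight needed to keep the pipe full."""
    return bandwidth_bps * rtt_s / 8

wan = bdp_bytes(100e6, 0.1)    # e.g. a 100 Mbps WAN path with a 100 ms RTT
dc = bdp_bytes(1e9, 100e-6)    # e.g. a 1 Gbps datacenter link with a 100 us RTT
print(f"WAN BDP: {wan / 1000:.0f} KB, datacenter BDP: {dc / 1000:.1f} KB")
```

A datacenter window of roughly 12 KB is only a handful of packets, which is why fast retransmit (which needs three duplicate ACKs) so often cannot trigger.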
45. More problems for TCP
In production datacenters, due to the widely-varying mix of applications, congestion in the network can last from tens to hundreds of seconds
In commodity switches the buffer pool is shared by all interfaces
If long flows hog this memory, queues build up for the short flows
Many-to-one communication patterns can lead to TCP throughput collapse, known as incast
This can cause overall application throughput to decrease by up to 90%
In virtualized environments, time-sharing of resources increases the latency seen by VMs
This latency can be orders of magnitude higher than the RTT between hosts inside a datacenter, leading to slow progress of TCP connections
53. Reaction
Some large-scale deployments have abandoned TCP altogether
For instance, Facebook now uses a custom UDP-based transport
TCP might be a “kitchen-sink” solution, but it is sub-optimal in a datacenter environment
Over the years, a number of alternatives have been proposed
56. Datacenter TCP (DCTCP)
Uses Explicit Congestion Notification (ECN) from switches to perform active queue management-based congestion control
Switches set the Congestion Experienced flag in packets whenever buffer occupancy exceeds a small threshold
DCTCP uses this information to reduce the window size in proportion to the fraction of marked packets
This enables it to react quickly to queue buildup and avoid buffer pressure
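The window rule can be sketched as follows, where alpha is the running estimate of the marked-packet fraction and g is the EWMA gain (1/16 in the DCTCP paper):

```python
def dctcp_alpha(alpha, marked, total, g=1 / 16):
    """Update the running estimate of the fraction of ECN-marked packets."""
    return (1 - g) * alpha + g * (marked / total)

def dctcp_cwnd(cwnd, alpha):
    """Cut the window in proportion to congestion, instead of always halving."""
    return cwnd * (1 - alpha / 2)

# With every packet marked, DCTCP halves the window like regular TCP;
# with only a few marks it backs off just slightly.
print(dctcp_cwnd(100, 1.0), dctcp_cwnd(100, 0.1))
```

This proportional reaction is what lets DCTCP keep queues short without sacrificing throughput.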
60. Multipath TCP (MPTCP)
Establishes multiple subflows over different paths between a pair of end-hosts
These subflows operate under a single TCP connection
The fraction of the total congestion window assigned to each subflow is determined by its speed
This moves traffic away from the most congested paths
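As an illustration of “window share follows speed”: if data is split across subflows in proportion to each subflow's cwnd/RTT, congested paths naturally carry less traffic. This is a deliberate simplification, not the actual coupled congestion control algorithm of RFC 6356:

```python
def split_by_speed(bytes_total, subflows):
    """subflows: list of (cwnd_bytes, rtt_s) pairs.
    Returns the bytes assigned to each subflow, proportional to
    its current speed cwnd/RTT."""
    speeds = [cwnd / rtt for cwnd, rtt in subflows]
    total = sum(speeds)
    return [bytes_total * s / total for s in speeds]

# A fast subflow (64 KB window) and a congested one (16 KB window):
print(split_by_speed(1000, [(64000, 0.001), (16000, 0.001)]))  # roughly [800.0, 200.0]
```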
64. tcpcrypt
A backwards-compatible enhancement to TCP that aims to efficiently and transparently provide encrypted communication to applications
Uses a custom key-exchange protocol that leverages the TCP options field
Like SSL, to reduce connection-setup cost for short-lived flows, it allows cryptographic state from one TCP connection to bootstrap subsequent ones
Applications can also be made aware of the presence of tcpcrypt to avoid redundant encryption
68. Deadline-Driven Delivery (D³)
Targets applications with distributed workflows and latency targets
Such applications associate a deadline with each network flow, and a flow is only useful if its deadline is met
Applications expose flow deadline and size information, which end-hosts exploit to request rates from routers along the data path
71. Outline
1 Datacenter Topologies
2 Transport Protocols
3 Network Sharing
4 Wrapping Up
72. Introduction
Network resources are shared amongst tenants, which can lead to contention and other undesired behaviour
Network performance isolation between tenants can be an important tool for:
  Minimizing disruption from legitimate tenants that run network-intensive workloads
  Protecting against malicious tenants that launch DoS attacks
The standard methodology to ensure isolation is to use VLANs
76. Virtual LAN
Acts like an ordinary LAN, but end-hosts do not necessarily have to be physically connected to the same segment
Nodes are grouped together by the VLAN
Broadcasts can also be sent within the same VLAN
VLAN membership information is inserted into Ethernet frames
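Concretely, that membership information is an IEEE 802.1Q tag inserted after the Ethernet source address; a minimal sketch of constructing one:

```python
import struct

def dot1q_tag(vlan_id, priority=0):
    """Build the 4-byte 802.1Q tag: the TPID 0x8100 followed by the
    TCI, which packs PCP (3 bits), DEI (1 bit), and VID (12 bits)."""
    assert 0 <= vlan_id < 4096, "VID is a 12-bit field"
    tci = (priority << 13) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

print(dot1q_tag(42).hex())  # 8100002a
```

Switches use the 12-bit VID to keep traffic from different VLANs separated, even on shared links.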
80. Rate-limiting End-hosts
In Xen, the network bandwidth available to each domU can be rate-limited
Can be used to implement basic QoS
The virtual interface is simply rate-limited
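For example, a Xen guest configuration can cap a domU's virtual interface via the `rate=` vif option (the exact syntax depends on the toolstack version; 100 Mb/s here is an arbitrary illustrative value):

```
# Xen guest configuration fragment: cap this domU's vif at 100 Mbit/s
vif = [ 'rate=100Mb/s' ]
```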
83. Outline
1 Datacenter Topologies
2 Transport Protocols
3 Network Sharing
4 Wrapping Up
84. The End
In reverse order:
1 Cloud stacks can be used to turn clusters and datacenters into private and public clouds
2 Virtualization of computation, storage, and networking allows many tenants to co-exist
3 Most data does not fit the relational model and is better suited to NoSQL stores
4 Data-intensive, task-parallel frameworks abstract away the details of distribution, work allocation, synchronization, concurrency, and communication; a perfect match for the cloud
5 The future is Big Data and Cloud Computing!
89. References
1 Mohammad Al-Fares, Alexander Loukissas, and Amin Vahdat. 2008. A scalable, commodity data center network architecture. In Proceedings of the ACM SIGCOMM 2008 Conference on Data Communication (SIGCOMM ’08). ACM, New York, NY, USA, 63-74.
2 Chuanxiong Guo, Haitao Wu, Kun Tan, Lei Shi, Yongguang Zhang, and Songwu Lu. 2008. DCell: a scalable and fault-tolerant network structure for data centers. In Proceedings of the ACM SIGCOMM 2008 Conference on Data Communication (SIGCOMM ’08). ACM, New York, NY, USA, 75-86.