Optimizing CDN Infrastructure for Live Streaming with Constrained Server Chaining
Zhenyun Zhuang∗ and Chun Guo†
∗College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA
Email: zhenyun@cc.gatech.edu
†Ying-Da-Ji Technologies, 603 Overseas High-Tech Venture Park,
NanShan District, Shenzhen 518057, China
Abstract—Content Delivery Networks (CDNs) are increasingly
being used to deliver live streaming on today’s Internet. The new
application type of live streaming exposes unique characteristics and challenges that require a more advanced design of the CDN
infrastructure. Unlike traditional web-object delivery, which allows CDN content servers to cache contents and thus typically involves only certain CDN servers (e.g., edge content servers), live streaming requires a real-time full CDN
streaming path that spans across the streaming source, the CDN
ingest server, the CDN content server, and end-viewers. Though
the ingest server is typically fixed for a particular live stream,
appropriate content servers need to be selected for delivering the
stream to end viewers.
Though today’s CDNs typically employ layered infrastructure
for delivering live streaming, in this work we propose a flat design, referred to as Constrained Server Chaining (CSC), for selecting optimal content servers to deliver live streams. Rather than employing a strictly layered infrastructure, CSC allows CDN streaming servers to dynamically choose upstream servers, thus saving transit cost for CDN providers.
I. INTRODUCTION
Content Delivery Networks (CDNs) [1]–[5] have been extensively used to deliver both static and dynamic web contents.
A typical CDN infrastructure consists of Ingest Servers (for
accepting customer contents), Origin Servers (for serving edge
servers) and Edge Servers (for serving end users directly),
forming a layered structure. Depending on the scale of a CDN,
the number of servers varies from several to hundreds or even
thousands.
Live streaming is becoming increasingly popular, and it
is expected to be more pervasive in the near future. Major
CDN providers such as Akamai [1], Level3 [4], Limelight [5] and Internap [3] support delivering live streaming with their CDNs. When supporting live streaming, CDN servers
are divided into two categories based on their functionalities:
(i) Ingest Servers which accept customers’ live streams; and
(ii) Streaming Servers which transit live streams from ingest
servers to end users. Following the convention of serving web
contents, CDN providers typically organize these servers into a layered structure and force streaming servers to retrieve live streams from ingest servers and then serve them to end viewers.1
1Some CDNs further split streaming servers into edge and origin streaming
servers and force edges to retrieve from origins. For these CDNs, the notion
of streaming servers applies to both edge and origin servers.
Unlike other traditional web-based applications, live stream-
ing is associated with unique characteristics and in turn
presents special challenges to CDN infrastructures. First, a live
stream has to be delivered from the CDN ingest server to end
viewers (via other CDN servers) in a real-time fashion, which is in sharp contrast with traditional CDN-based web delivery, where typically only the last hop (i.e., from CDN servers to
web users) needs to be optimized. Second, live streaming often
features a high bandwidth requirement. A typical live stream can easily consume up to 3 Mbps; thus there is a strong need for reducing transit cost on CDN networks.
Though effectively addressing all challenges associated with
live streaming requires advanced design/redesign of many
mechanisms of CDN infrastructure, in this work, we focus
on optimizing the structure of CDN streaming servers and
propose a design called Constrained Server Chaining (CSC)
for supporting live streaming. CSC’s key idea is to allow CDN
streaming servers to form a constrained chain structure for
pulling live streams from the ingest server and delivering them to end viewers.
In the following presentation, we first provide some back-
ground information and motivate our design of CSC in Section
II. We then present the problem definition and the overall design of CSC in Section III, and detail CSC's components in Section IV.
We perform prototype-based evaluation and show the results
in Section V. We also present related work in Section VI.
Finally, we conclude the work in Section VII.
II. BACKGROUND AND MOTIVATION
Unique properties of live streaming motivate advanced
design of infrastructure for CDN providers. In this section,
we first provide some background information about CDN
and live streaming. We then use an illustrative example to
demonstrate the potential improvements with an optimized
infrastructure.
A. Background
CDN: CDNs can be used to deliver both static/dynamic
web contents and live streams. A typical CDN infrastructure
consists of ingest servers, origin servers and edge servers.
Depending on the scale of a CDN, the number of edge servers typically varies from dozens to tens of thousands. The number of
origin servers is smaller than that of edge servers, while a typical CDN has only a handful of ingest servers. Various servers are organized into POPs (Points of Presence).

Fig. 1. Live streaming delivered with CDN: (a) Default Delivery; (b) Optimized Delivery
Live Streaming: Live streaming is a new type of applica-
tion that is increasingly being delivered by CDNs. Featuring
properties different from static contents, live streaming typi-
cally has a high bandwidth requirement, which exerts much pressure on the underlying CDN infrastructure in the form of high
transit cost. Unlike static web contents, live streams cannot
be cached and have to maintain a full path from the customer
streaming source to the end viewers/users. Various types of
streaming servers can be used to support live streaming, and
many of them (e.g., Adobe Flash Media Server) allow the
chaining of streaming servers when delivering live streams.
B. Motivating example
Consider an example where Source S is ingesting a live
stream into the CDN's ingest server I. Assume both streaming servers S1 and S2 serve live streams to users. With a strictly layered CDN structure, both S1 and S2 have to pull the stream from I, resulting in CDN transit hops IS1 and IS2, as shown
in Figure 1(a).
However, as shown in Figure 1(b), there exists another
transit structure which allows S2 to pull the stream from S1 as
opposed to I. The two CDN transit hops of IS1 and S1S2 have
a smaller aggregated CDN transit cost, as S1S2 is shorter than
IS2. Note that though the example shown is trivial at first glance, the cost saving for a CDN with many streaming servers
can be huge. Specifically, the layered CDN will always result
in a star-like delivery map, while a tree-like delivery map can have significantly shorter delivery paths.2
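To make the saving concrete, the comparison can be sketched in a few lines of Python; the coordinates below are hypothetical placements of I, S1 and S2, not taken from any real topology:

```python
import math

# Hypothetical server positions (arbitrary units), mirroring Figure 1:
# S2 sits close to S1 but far from the ingest server I.
I, S1, S2 = (0.0, 0.0), (10.0, 0.0), (12.0, 0.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

star_cost = dist(I, S1) + dist(I, S2)    # layered: S1 and S2 both pull from I
chain_cost = dist(I, S1) + dist(S1, S2)  # CSC: S2 pulls from the closer S1

print(star_cost, chain_cost)  # 22.0 12.0
```

Whenever S1S2 is shorter than IS2, the chained structure has a strictly smaller aggregated transit cost.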
III. DESIGN OF CSC
In this section, we present the design of Constrained Server
Chaining (CSC). We begin by defining the problem. We then
present the design overview of CSC, its software architecture,
and the entire process with an illustrative example.
A. Problem definition
We consider the scenario where a CDN provider has two
types of servers: ingest servers which accept published live
streams from customers, and streaming servers that transit the
streams to end users. We use IS and SS to denote the two server
sets, respectively. The CDN provider delivers a set of live
2Though the full delivery paths for some users are longer with a tree-like structure, the lengths can be easily capped by exerting some restrictions on the server selection process. We will address this concern in Section III.
streams denoted by LM. We further assume each live stream
LMi has a single ingesting source, which connects to ingest server ISi.
The problem we are addressing with CSC can be de-
fined as follows. For each of the live streams (denoted by
LMi and ingested to ISi), CSC needs to decide the optimal
streaming server set OSSi such that the CDN transit cost
can be minimized, and the end viewers’ experience is not
adversely compromised (which is captured by imposing a
delivery length cap). We use Ci to denote the aggregated transit cost of delivering LMi across ISi and OSSi.
With these assumptions, the input, output, and objective
function of CSC are:
• Input: (i) Live stream LMi, (ii) CDN ingest server ISi, (iii)
CDN streaming server set SS, and (iv) Delivery length cap;
• Output: Optimal streaming server set OSSi;
• Objective function: Total CDN transit cost is minimized.
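For concreteness, a problem instance can be written down as plain data. This is our own sketch; the field names simply mirror the notation above (LMi, ISi, SS, and the delivery length cap):

```python
from dataclasses import dataclass

@dataclass
class CSCProblem:
    stream: str              # live stream LMi
    ingest: str              # ingest server ISi
    streaming_servers: list  # CDN streaming server set SS
    length_cap: float        # delivery length cap
    # Output: the optimal streaming server set OSSi, chosen so that the
    # aggregated transit cost Ci of delivering LMi is minimized.

p = CSCProblem("LM1", "IS1", ["S1", "S2", "S3"], 5.0)
print(p.stream, p.length_cap)
```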
B. Design Overview
As we observed from the illustrative example, CDN transit
cost (i.e., between CDN servers) can be saved by allowing
streaming servers to retrieve live streams from other streaming
servers rather than always retrieving from the CDN ingest
server. Such a design essentially converts the layered infrastructure most CDN providers are using into a flat infrastructure.
CSC is a specialized server-selection mechanism designed
for live streaming applications. It aims at saving CDN transit cost without compromising end-user experience. Note that in
this work, for ease of presentation, we use distance as the cost metric, but it would be easy to extend the cost metric to include other factors such as pricing. We will revisit the cost
metric in Section IV.
CSC achieves the cost savings by allowing streaming servers
to pull live streams from other closer streaming servers. In
other words, rather than simply retrieving from the ingest
server as in a layered structure, streaming servers can con-
nect to other streaming servers to form a flat and layer-less
structure. In doing so, however, the resulting delivery path for
certain viewers playing the particular stream could be inflated
in the form of larger distances, which could cause other undesirable effects such as longer playback-buffering time. To address this, CSC imposes a length cap when selecting upstream servers. It always chooses streaming servers that satisfy the length-cap requirement. Whenever
necessary, it can also fall back to the default behavior of
directly pulling from ingest servers. We elaborate on the length cap in Section IV.
CSC's operations are per-stream; thus it maintains
appropriate state for every live stream. Specifically, for each
live stream, the active streaming servers that currently are
serving it are recorded so that other streaming servers can
retrieve from them. In addition to the per-stream state, CSC
also maintains a cost state that is shared by all live streams: it records the server-server transit cost for each pair of CDN servers (i.e., ingest-streaming and streaming-streaming).
Fig. 2. Software Architecture: the CSC Engine receives the user request and returns the optimal streaming server. It retrieves state from the Server Streaming State (updated by Server Connection Monitoring, SCM, on connecting/disconnecting events) and the Server-Server Cost State (updated by Cost Determination, CD), and consults Length Cap Determination (LCD) for cap assignment/updating.
Variables
Ui: New live stream user
LMk: requested live stream
SSei: The streaming server that serves Ui
CSSi: The candidate streaming server set
OSSo: The optimal upstream streaming server
1 Receive a new live stream playback request from Ui
2 Determine the streaming server SSei that will serve Ui
3 Check whether SSei is serving LMk or not
4 If SSei is serving LMk
5 Begin playback
6 Else
7 Retrieve state information
8 Determine candidate streaming server set CSSi
9 Retrieve cost values and server streaming state
10 Determine the optimal upstream server OSSo
11 (The cost between SSei and OSSo is minimum)
12 Update server streaming state
13 End
Fig. 3. Pseudo code of CSC
C. Software Architecture and Overall Process
The software architecture of CSC is shown in Figure 2.
The heart of CSC is the CSC Engine, which takes the user
request, interacts with other components, and then decides
the optimal upstream streaming server. CSC selects streaming
servers based on the cost associated with the live streaming
delivery path, and the cost is CDN-specific. CSC maintains the cost values of each CDN hop and the server
streaming state for each live stream. The server-server cost
is determined by the Cost Determination (CD) component, and the server streaming state is adjusted by Server Connection Monitoring (SCM), which updates state whenever a connection between CDN servers is set up or torn down.
The overall process of CSC is as follows. When a new live streaming viewer begins to play back a stream, the direct CDN streaming server is first determined (e.g.,
by CDN’s GeoDNS service [6]). If the streaming server is
actively serving the stream, then it sends the stream to the user.
Otherwise, the CDN will attempt to determine the optimal upstream streaming server. For this, the CSC engine retrieves
the cost values and streaming state of streaming servers, and
it then determines the candidate streaming server set. Each streaming server in the candidate set has to satisfy three
requirements: (i) actively serving the requested stream; (ii)
available for more retrieval (e.g., not overloaded); and (iii)
retrieval from it does not violate the length cap. Based on the candidate set, the CSC engine then selects the optimal
streaming server that incurs the smallest transit cost (i.e., the
transit cost between the selected upstream server and the direct
streaming server). Finally, it updates the streaming server state
and begins stream retrieval. CSC’s pseudo code is shown in
Figure 3.
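The candidate filtering and selection described above can be sketched as a small routine. This is an illustrative Python rendering of the steps in Figure 3, not the authors' implementation; the dictionary shapes and the example numbers below are our assumptions:

```python
def select_upstream(stream, direct, ingest, servers, cost, path_len, cap):
    """Pick the cheapest valid upstream server for `direct` (cf. Figure 3).

    servers:  name -> {"serving": set of stream names, "overloaded": bool}
    cost:     (a, b) -> transit cost of the hop a -> b
    path_len: name -> delivery-path length from the ingest up to that server
    cap:      delivery length cap for this stream
    """
    candidates = [
        s for s, st in servers.items()
        if s != direct
        and stream in st["serving"]                 # (i)  actively serving
        and not st["overloaded"]                    # (ii) capacity left
        and path_len[s] + cost[(s, direct)] <= cap  # (iii) length cap holds
    ]
    if not candidates:
        return ingest  # fall back to pulling directly from the ingest server
    return min(candidates, key=lambda s: cost[(s, direct)])

# Hypothetical costs and state: S2 is the cheapest hop for S3, but its
# delivery path is long; tightening the cap changes the selection.
servers = {"S1": {"serving": {"lm"}, "overloaded": False},
           "S2": {"serving": {"lm"}, "overloaded": False}}
cost = {("S1", "S3"): 4.0, ("S2", "S3"): 1.0}
path_len = {"S1": 2.0, "S2": 6.0}

print(select_upstream("lm", "S3", "I", servers, cost, path_len, cap=7.0))  # S2
print(select_upstream("lm", "S3", "I", servers, cost, path_len, cap=6.0))  # S1
print(select_upstream("lm", "S3", "I", servers, cost, path_len, cap=5.0))  # I
```

The last call mirrors the fallback behavior: when no candidate satisfies the cap, the server pulls directly from the ingest server.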
D. Illustrative example
We now use an example to illustrate how CSC works.
Assume a CDN has one ingest server I1 and three streaming servers (S1, S2 and S3) to serve a particular live stream, as
shown in Figure 4(A).
When the first user U1 joins, it obtains the stream from S1.
Since other streaming servers are not serving the stream, S1
directly pulls from I1, as shown in Figure 4(B). When the
second user U2 begins pulling the stream from S2, S2 notices
that both S1 and I1 are available upstream servers. Since S1 is
closer to it, S2 pulls from S1, as shown in Figure 4(C).
Finally, when U3 joins, its streaming server S3 can pull from
either I1, S1 or S2. Though S2 is the closest server to S3,
pulling from S2 results in a longer path compared to directly
connecting to I1. Assuming connecting to S2 will result in a
delivery path (i.e., I1S1S2S3) that exceeds the length threshold,
then S2 will not be included in the candidate streaming server
set. Thus, S3 will instead pull from I1, resulting in the setup shown in Figure 4(D).
IV. CSC COMPONENTS
We now describe the detailed design of CSC components.
A. Cost Determination (CD)
Fig. 4. Illustrative Example: panels (A)-(D) show ingest server I1 and streaming servers S1-S3 as users join.

In order to perform CSC operations, each delivery hop (i.e., ingest-server to streaming-server and streaming-server to streaming-server) on the CDN infrastructure has to be associated with a quantifiable cost. Note that in this work, for ease of presentation, we use physical distance as the sample cost, but it should be straightforward to define other types of cost such as network distance, price, or reliability. For instance, using Dij and Pij to denote the physical distance and the price of the network path between servers i and j, respectively, a CDN provider could derive a new cost DPij by composing the two types of cost. The detailed definition of combined cost metrics is beyond the scope of this work.
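As a sketch of such a composition, one could weight and multiply the two factors; the function name and weights below are illustrative assumptions, not a CDN standard:

```python
def combined_cost(d_ij, p_ij, w_d=1.0, w_p=1.0):
    """Compose distance Dij and price Pij into a single hop cost DPij.

    A weighted product: increasing either the distance or the price of a
    hop raises its cost; the weights tune their relative importance.
    """
    return (d_ij ** w_d) * (p_ij ** w_p)

# A 2.14 KMile hop priced at $0.10/GB yields 0.214 cost units.
print(combined_cost(2.14, 0.10))
```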
B. Length Cap Determination (LCD)
The Length Cap Determination (LCD) component maintains the desired delivery length cap imposed by either CDN providers or customers. Delivery length can be measured in terms of physical distance, network delay, number of intermediate streaming servers, or network hops. The simplest way to enforce the length cap is to have CDN providers set the cap values based on the network environment, but it is desirable to allow each customer to control per-stream cap values, since each customer and stream may have different requirements on delivery latency. The customer can preset the values for each stream with CDN providers or set the values on the fly. For instance, with Adobe Live Encoder and Flash Media Server, the customer sources could use the NetConnection.connect() call in Flash players to pass in cap values for each stream.
C. Server Connection Monitoring (SCM)
The SCM (Server Connection Monitoring) component keeps the server streaming state fresh, in the sense that whether a streaming server is actively serving a live stream is always up to date. SCM monitors the events of connection
and disconnection occurring on streaming servers. When a
streaming server receives requests from end viewers and
begins pulling from another streaming server or ingest server, a
connection event occurs. On the other hand, when a streaming
server is disconnected by all end viewers and stops pulling
from upstream CDN streaming servers, a disconnection event
occurs. For both types of events, SCM updates corresponding
state accordingly.
To capture the events of connections and disconnections,
streaming servers should have the ability to monitor such
events. This capability can be implemented inside the various live streaming servers that directly serve viewers, for example in the form of server plug-ins; examples of such servers are Wowza Media Server, Adobe Media Server and the IIS Smooth Streaming server.
D. Maintained States
CSC maintains two types of state information: server-server cost and server streaming state. The server-server cost is maintained as a two-dimensional array, e.g., COST[NS][NS], where NS is the number of all CDN servers (i.e., both ingest servers and streaming servers). The cost information is used by all live streams being delivered by the particular CDN.
The server streaming state is maintained as a two-dimensional array, e.g., STREAMING[NS][NM], where NS is the number of CDN servers and NM is the number of live streams. Initially, all elements of STREAMING are set to zero. When a streaming server CSi starts serving a stream LMj, STREAMING[i][j] is set to 1. When a streaming server CSi stops serving a stream LMj, the corresponding STREAMING array element is reset to 0.
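The two state structures can be sketched as follows; dictionaries stand in for the fixed-size COST and STREAMING arrays, and the class name is our own:

```python
class CSCState:
    """Server-server cost plus per-stream serving state (cf. COST/STREAMING)."""

    def __init__(self):
        self.cost = {}       # (server_a, server_b) -> transit cost
        self.streaming = {}  # (server, stream) -> 1 if actively serving, else 0

    def on_connect(self, server, stream):
        # SCM connection event: the server starts serving the stream.
        self.streaming[(server, stream)] = 1

    def on_disconnect(self, server, stream):
        # SCM disconnection event: the server stops serving the stream.
        self.streaming[(server, stream)] = 0

    def is_serving(self, server, stream):
        return self.streaming.get((server, stream), 0) == 1

state = CSCState()
state.on_connect("S1", "LM1")
print(state.is_serving("S1", "LM1"))  # True
state.on_disconnect("S1", "LM1")
print(state.is_serving("S1", "LM1"))  # False
```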
E. CSC Engine
The CSC Engine performs the actual computation and comparison of costs, then selects the most appropriate upstream streaming server given a live stream pulling request.
When such a request arrives, the CSC engine first determines the streaming server that will directly serve the user. The
determination can be easily done by CDN’s existing server-
selection mechanism (e.g., GeoDNS). If the streaming server
is actively serving the stream (i.e., STREAMING[][] element
is 1), then the stream is immediately served and no further
actions will be taken.
If the streaming server is not actively serving the stream,
CSC engine then decides the candidate streaming servers
by filtering out the unavailable or overloaded servers, then
retrieves the server-server cost, server streaming state, and
the length cap values for the particular stream. From the set of candidate streaming servers, it then determines the optimal upstream streaming server to pull the stream from, based on the algorithm described in Figure 3.
V. EVALUATION
To gain a concrete understanding of CSC's impact on both the CDN and viewers, we consider the following simplified scenario to quantify the saving on CDN transit cost and the performance experienced by end viewers. We assume a US-mainland CDN provider that has 1 ingest POP (Point of Presence) and 4 streaming POPs, where each POP has only a single server. We further assume a default layered-
CDN scenario where each streaming server has to retrieve
live streams from the ingest server. For comparison, we also
assume a CSC-applied scenario where streaming servers can
retrieve live streams from either other streaming servers or the
ingest server.
For simplicity, we consider a simple performance metric of physical distance in units of thousand miles (KMiles) for both CDN transit-cost saving and live-stream viewers.
Fig. 5. Prototype Testbed: (a) prototype map with source S, ingest server I, and streaming servers S1-S4; (b) default CDN delivery, with each server pulling from I and serving users U1-U4; (c) CSC delivery.
Thus, the larger the distances are, the higher cost CDN
providers pay.
A. Prototype Setup
We built a prototype with Adobe Flash Media Server (FMS)
4.0. The prototype consists of 5 servers deployed in 5 cities,
all running Windows Server 2008 R2. One of the servers (I) accepts live stream ingestion, while the other streaming servers (S1 through S4) can serve viewers directly and pull
streams from other servers, as shown in Figure 5(a). The
streaming source encoder is Flash Media Live Encoder 3.1.
Both ingesting and streaming protocols are RTMP.
The streaming servers run customized Server-Side ActionScript to dynamically pull from appropriate streaming servers
(sample code shown in Figure 6). Specifically, when a player issues a NetConnection.connect("rtmp://S1/app1/stream1", [cap-values]) call to streaming server S1 requesting live stream stream1 of application app1, if S1 is serving stream1, it simply calls acceptConnection(). Otherwise, if S1 is not serving stream1, it first determines the optimal streaming server to pull from, then creates a Stream object and pulls from that server. The cap-values passed in with the call are also remembered by the CSC engine.
In the evaluation, there are four users in total, joining in the sequence U1 through U4. Note that the users are directly served by S1 through S4, respectively. We first consider
the default scenario where layered CDN infrastructure is firmly
enforced. The stream delivering map is shown in Figure 5(b).
As shown, all streaming servers pull the live stream from I.
For comparison, we also consider an optimized flat scenario
where CSC is applied, and the eventual network setup is shown
in Figure 5(c). S1 and S4 are directly pulling streams from I,
while S2 and S3 rely on S1 for relaying.
application.onAppStart = function(){
    CurServers = ObtainCandidateServers("stream1");
    OptimalServer = DecideOptimalServer(CurServers);
    nc = new NetConnection(); // Retrieval from OptimalServer
    myStream = Stream.get("stream1");
    nc.onStatus = function(info){
        if (info.code == "NetConnection.Connect.Success"){
            myStream.play("stream1", -1, -1, true, nc);
        }
    };
    nc.connect("rtmp://OptimalServer/app1");
}
Fig. 6. FMS Server-Side Code for Handling "app1/stream1" Request
B. CDN Transit Cost
Since we use the cost metric of physical distance, in the
following we show the transit cost results in the unit of
KMiles. For both scenarios, we count the CDN delivery
lengths of server-server paths for each streaming server and the
aggregated CDN delivery path. As shown in Figure 7, the blue bars show the default scenario, while the red bars show the CSC scenario. First, out of the four CDN delivery paths, two (i.e., IS2 and IS3) are shorter than in the default scenario, while the other two (i.e., IS1 and IS4) are the same.
Second, the aggregated server-server distances are 5.78 KMiles and 3.64 KMiles for the two scenarios, respectively. The reason for the reduced distance in the CSC scenario is that some streaming servers (i.e., S2 and S3) pull from S1 rather than from I. For this particular setup, CSC results in a much smaller CDN transit cost (a reduction of more than 37%).
These results suggest that CSC is able to achieve significant transit cost savings inside the CDN. Note that in this simple scenario, we assume each POP has only a single CDN server. With more servers, the saved transit cost can be even more significant, as each server incurs a separate delivery path.
C. Delivery Lengths of Individual Viewers
Since each live stream has to be delivered from the stream source all the way to the end users in a real-time fashion, end-user performance could be impacted by the length of the entire delivery path, in the form of start-up playback delay and path reliability. Specifically, with a longer delivery path, a user often experiences a larger playback delay, and the connection is more likely to be unreliable.
We plot the CDN delivery length of each individual user
in Figure 7(b). We observe that U1 and U4 have the same
delivery lengths in both scenarios, while U2 and U3 experience
longer paths with CSC. The aggregated distances for the two scenarios are 10.41 KMiles and 11.42 KMiles, respectively, an increase of 9.70%. These results show that some users' experience can be slightly negatively affected by CSC, which is expected, as the default layered, star-like network results in the shortest path for each individual user.
Fig. 7. CDN transit cost and user experience: (a) CDN transit cost (KMiles) for paths S1-S4 and in total, Default vs. CSC; (b) individual delivery path lengths for users U1-U4 and in total.

Note that CDN providers often optimize the CDN delivery paths (i.e., paths between CDN servers). For instance, Akamai [1] and Internap [3] optimize their networks with patented techniques [7], [8]. Therefore, it is important to realize that user-experienced performance will not be affected significantly even with longer delivery paths.
D. Money Talks
Taking a more direct business perspective, we now compare
the transit cost with respect to the pricing of bandwidth usage.
Depending on subscription packages, CDN providers may
need to pay ISPs for the bandwidth used. We assume each live stream is 3 Mbps. In the above scenario, where only 4 streaming servers are delivering the live stream, the saved bandwidth-distance product per day would be about 0.07 TB*KMiles (or 25.35 TB*KMiles per year). With 3000 streams, the saving would be 210 TB*KMiles per day (or 76.04 PB*KMiles per year).
Though the exact CDN transit pricing varies across time and subscription packages, we attempt to obtain concrete cost values by simply assuming a transit cost of $0.10 per GB*KMile. With such a cost, CSC can help save $7.60 million per year with 3000 streams, or about $2.53K per stream.
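These back-of-the-envelope numbers can be reproduced directly; the 3 Mbps rate, the 5.78 and 3.64 KMile aggregate distances, and the $0.10 per GB*KMile rate are the assumptions stated above:

```python
saved_kmiles = 5.78 - 3.64                # transit distance saved (KMiles)
gb_per_day = 3 * 86400 / 8 / 1000         # one 3 Mbps stream, per day, in GB
tb_kmiles_day = gb_per_day * saved_kmiles / 1000

print(round(tb_kmiles_day, 2))            # ~0.07 TB*KMiles per day
yearly_3000 = tb_kmiles_day * 365 * 3000  # 3000 streams, one year
savings = yearly_3000 * 1000 * 0.10       # TB -> GB, at $0.10 per GB*KMile
print(round(savings / 1e6, 1))            # ~$7.6M per year
```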
VI. RELATED WORK
Content Delivery Networks (CDNs) have been deployed by various providers to expedite web access [1]–[5]. The techniques used by CDNs for delivering conventional web contents are explained in the literature [9], [10]. Despite the popularity and pervasiveness of CDNs, many research problems and system-building challenges persist in optimizing CDN infrastructure and addressing the challenges associated with supporting various types of applications. For instance, as we demonstrated in this work, how to efficiently support live streaming with CDNs deserves more study.
Though various network architectures have been proposed
and utilized for different deployment scenarios with the rapid
development of Internet technology [11]–[14], CDN providers typically apply a layered structure (e.g., ingest layer, origin layer and edge layer) due to the caching property of static web content delivery, tracing back to the early days of the CDN industry.
Realizing that CDNs and P2P can complement each other,
many research efforts also attempt to integrate these two
realms into one [15]. Unlike these works that combine the
merits of both CDNs and P2P, our work takes an inside
perspective and attempts to refine CDN delivery infrastructure.
Live streaming has been studied and analyzed from various perspectives [16]–[18]. The unique properties of live streaming, coupled with CDN-assisted delivery, justify a specialized design of CDN infrastructure that specifically serves live streaming and saves CDN transit cost. To the best of our knowledge, this work is the first to consider and address the problem of saving transit cost when delivering live streaming with CDNs.
VII. CONCLUSION
In this work, we propose a flat CDN infrastructure, referred to as CSC, for delivering live streaming. The new infrastructure can greatly reduce CDN transit cost without significantly affecting users' experience.
REFERENCES
[1] “Akamai Technologies,” http://www.akamai.com/.
[2] “AT&T Inc.,” http://www.att.com/.
[3] “Internap Network Services Corporation,” http://www.internap.com/.
[4] “Level 3 Communications, LLC,” http://www.level3.com/.
[5] “Limelight Networks,” http://www.limelightnetworks.com/.
[6] “BIND GeoDNS,” http://www.caraytech.com/geodns/.
[7] Internap-MIRO, “Method and system for optimizing routing through
multiple available internet route providers,” 2005.
[8] Akamai Technologies, “Optimal route selection in a content delivery network,” United States Patent 7274658, 2007.
[9] H. T. Kung and C. H. Wu, “Content networks: Taxonomy and new approaches,” in The Internet as a Large-Scale Complex System, K. Park and W. Willinger, Eds., 2002.
[10] D. C. Verma, S. Calo, and K. Amiri, “Policy based management of
content distribution networks,” IEEE Network Magazine, vol. 16, pp.
34–39, 2002.
[11] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan,
“Chord: A scalable peer-to-peer lookup service for internet applications,”
in SIGCOMM ’01: Proceedings of the 2001 conference on Applications,
technologies, architectures, and protocols for computer communications,
San Diego, CA, USA, 2001.
[12] A. I. T. Rowstron and P. Druschel, “Pastry: Scalable, decentralized
object location, and routing for large-scale peer-to-peer systems,” in
Middleware ’01: Proceedings of the IFIP/ACM International Conference
on Distributed Systems Platforms Heidelberg, London, UK, 2001.
[13] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Schenker, “A
scalable content-addressable network,” in SIGCOMM ’01: Proceedings
of the 2001 conference on Applications, technologies, architectures, and
protocols for computer communications, San Diego, CA, USA, 2001.
[14] I. Stoica, D. Adkins, S. Zhuang, S. Shenker, and S. Surana, “Internet
indirection infrastructure,” IEEE/ACM Trans. Netw., vol. 12, no. 2, pp.
205–218, 2004.
[15] H. Yin, X. Liu, T. Zhan, V. Sekar, F. Qiu, C. Lin, H. Zhang, and B. Li,
“LiveSky: Enhancing CDN with P2P,” ACM Trans. Multimedia Comput.
Commun. Appl., vol. 6, pp. 16:1–16:19, August 2010.
[16] K. Sripanidkulchai, B. Maggs, and H. Zhang, “An analysis of live
streaming workloads on the internet,” in Proceedings of the 4th ACM
SIGCOMM conference on Internet measurement, ser. IMC ’04, 2004.
[17] J. He, A. Chaintreau, and C. Diot, “A performance evaluation of scalable
live video streaming with nano data centers,” Comput. Netw., vol. 53,
pp. 153–167, February 2009.
[18] C. Vicari, C. Petrioli, and F. L. Presti, “Dynamic replica placement and
traffic redirection in content delivery networks,” SIGMETRICS Perform.
Eval. Rev., vol. 35, December 2007.