IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 6, NO. 1, JUNE 2008




Dual-resource TCP/AQM for Processing-constrained Networks

Minsu Shin, Student Member, IEEE, Song Chong, Member, IEEE, and Injong Rhee, Senior Member, IEEE

An early version of this paper was presented at IEEE INFOCOM 2006, Barcelona, Spain, 2006. This work was supported by the Center for Broadband OFDM Mobile Access (BrOMA) at POSTECH through the ITRC program of the Korean MIC, supervised by IITA (IITA-2006-C1090-0603-0037). Minsu Shin and Song Chong are with the School of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 305-701, Korea (email: msshin@netsys.kaist.ac.kr; song@ee.kaist.ac.kr). Injong Rhee is with the Department of Computer Science, North Carolina State University, Raleigh, NC 27695, USA (email: rhee@csc.ncsu.edu).

Abstract—This paper examines congestion control issues for TCP flows that require in-network processing on the fly in network elements such as gateways, proxies, firewalls and even routers. Applications of these flows will become increasingly abundant as the Internet evolves. Since these flows consume CPU cycles in network elements, both bandwidth and CPU resources can be a bottleneck, and congestion control must therefore deal with "congestion" on both of these resources. In this paper, we show that conventional TCP/AQM schemes can significantly lose throughput and suffer harmful unfairness in this environment, particularly when CPU cycles become more scarce (which is the likely trend given the recent explosive growth rate of bandwidth). As a solution to this problem, we establish a notion of dual-resource proportional fairness and propose an AQM scheme, called Dual-Resource Queue (DRQ), that can closely approximate proportional fairness for TCP Reno sources with in-network processing requirements. DRQ is scalable because it maintains no per-flow state while minimizing communication among different resource queues, and it is incrementally deployable because it requires no change in TCP stacks. Our simulation study shows that DRQ approximates proportional fairness without much implementation cost and that even an incremental deployment of DRQ at the edge of the Internet improves the fairness and throughput of these TCP flows. Our work is at an early stage and might lead to interesting developments in congestion control research.

Index Terms—TCP-AQM, transmission link capacity, CPU capacity, fairness, efficiency, proportional fairness.

I. INTRODUCTION

ADVANCES in optical network technology have enabled a fast-paced increase in physical bandwidth, whose growth rate has far surpassed that of other resources such as CPU and memory bus. This phenomenon causes network bottlenecks to shift from bandwidth to other resources. The rise of new applications that require in-network processing hastens this shift, too. For instance, a voice-over-IP call made from a cell phone to a PSTN phone must go through a media gateway that performs audio transcoding "on the fly" as the two end points often use different audio compression standards. Examples of in-network processing services are increasingly abundant, from security and performance-enhancing proxies (PEP) to media translation [1] [2]. These services add additional load on the processing capacity of network components. New router technologies such as extensible routers [3] or programmable routers [4] also need to deal with scheduling of CPU usage per packet as well as bandwidth usage per packet. Moreover, standardization activities to embrace various network applications, especially at network edges, are found in [5] [6] under the name of Open Pluggable Edge Services.

In this paper, we examine congestion control issues for an environment where both bandwidth and CPU resources can be a bottleneck. We call this the dual-resource environment. In the dual-resource environment, different flows can have different processing demands per byte.

Traditionally, congestion control research has focused on managing only bandwidth. However, we envision (and it is indeed happening now to some degree) that diverse network services will reside somewhere inside the network, most likely at the edge of the Internet, processing, storing or forwarding data packets on the fly. As in-network processing is likely to become popular, our work, which examines whether current congestion control theory can be applied without modification and, if not, what scalable solutions can fix the problem, is highly timely.

In our earlier work [7], we extended proportional fairness to the dual-resource environment and proposed a distributed congestion control protocol for the same environment, where end-hosts are cooperative and explicit signaling is available for congestion control. In this paper, we propose a scalable active queue management (AQM) strategy, called Dual-Resource Queue (DRQ), that can be used by network routers to approximate proportional fairness without requiring any change in end-host TCP stacks. Since it requires no change in TCP stacks, our solution is incrementally deployable in the current Internet. Furthermore, DRQ is highly scalable in the number of flows it can handle because it maintains no per-flow states or queues. DRQ maintains only one queue per resource and works with classes of application flows whose processing requirements are a priori known or measurable.

Resource scheduling and management of a single resource type in network environments where different flows can have different demands is a well-studied area of research. Weighted-fair queuing (WFQ) [8] and its variants such as deficit round robin (DRR) [9] are well-known techniques to achieve fair and efficient resource allocation. However, these solutions are not scalable, and implementing them in a high-speed router with many flows is difficult since they need to maintain per-flow queues and states. The other extreme is to have routers maintain simpler queue management schemes such as RED [10], REM [11] or PI [12]. Our study finds that
these solutions may yield extremely unfair allocations of CPU and bandwidth and can sometimes lead to very inefficient resource usage.

Some fair queueing algorithms such as Core-Stateless Fair Queueing (CSFQ) [13] and Rainbow Fair Queueing (RFQ) [14] have been proposed to eliminate the problem of maintaining per-flow queues and states in routers. However, those schemes are concerned with bandwidth sharing only and do not consider joint allocation of bandwidth and CPU cycles. Estimation-based Fair Queueing (EFQ) [15] and Prediction Based Fair Queueing (PBFQ) [16] have also been proposed for fair CPU sharing, but they require per-flow queues and do not consider joint allocation of bandwidth and CPU cycles either.

Our study is, to the best of our knowledge, the first to examine the issues of TCP and AQM under the dual-resource environment, and we show by simulation that DRQ achieves fair and efficient resource allocation without imposing much implementation cost. The remainder of this paper is organized as follows. In Section II, we define the problem and fairness in the dual-resource environment; in Sections III and IV, we present DRQ and its simulation study; and in Section V, we conclude the paper.

II. PRELIMINARIES: NETWORK MODEL AND DUAL-RESOURCE PROPORTIONAL FAIRNESS

A. Network model

We consider a network that consists of a set of unidirectional links, L = {1, ..., L}, and a set of CPUs, K = {1, ..., K}. The transmission capacity (or bandwidth) of link l is B_l (bits/sec) and the processing capacity of CPU k is C_k (cycles/sec). These network resources are shared by a set of flows (or data sources), S = {1, ..., S}. Each flow s is associated with its data rate r_s (bits/sec) and its end-to-end route (or path), which is defined by a set of links, L(s) ⊂ L, and a set of CPUs, K(s) ⊂ K, that flow s travels through. Let S(l) = {s ∈ S | l ∈ L(s)} be the set of flows that travel through link l and let S(k) = {s ∈ S | k ∈ K(s)} be the set of flows that travel through CPU k. Note that this model is general enough to include various types of router architectures and network elements with multiple CPUs and transmission links.

Flows can have different CPU demands. We represent this notion by the processing density w_s^k, k ∈ K, of each flow s, defined as the average number of CPU cycles required per bit when flow s is processed by CPU k. w_s^k depends on k since different processing platforms (CPU, OS, and software) require different numbers of CPU cycles to process the same flow s. The processing demand of flow s at CPU k is then w_s^k r_s (cycles/sec).

Since there are limits on CPU and bandwidth capacities, the amount of processing and bandwidth usage by all flows sharing these resources must be less than or equal to the capacities at any time. We represent this notion by the following two constraints: for each CPU k ∈ K, Σ_{s∈S(k)} w_s^k r_s ≤ C_k (processing constraint), and for each link l ∈ L, Σ_{s∈S(l)} r_s ≤ B_l (bandwidth constraint). These constraints are called dual-resource constraints, and a nonnegative rate vector r = [r_1, ..., r_S]^T satisfying these dual constraints for all CPUs k ∈ K and all links l ∈ L is said to be feasible.

A1: We assume that each CPU k ∈ K knows the processing densities w_s^k for all the flows s ∈ S(k).

This assumption is reasonable because a majority of Internet applications are known and their processing requirements can be measured either off-line or on-line, as discussed below. In practice, network flows can be readily classified into a small number of application types [15], [17]–[19]. That is, there is a finite set of application types, a flow is an instance of an application type, and flows have different processing densities only if they belong to different application types. In [17], applications have been divided into two categories, header-processing applications and payload-processing applications, and each category has been further divided into a set of benchmark applications. In particular, the authors in [15] experimentally measure the per-packet processing times of several benchmark applications such as encryption, compression, and forward error correction. The measurements find the network processing workloads to be highly regular and predictable. Based on these results, they propose an empirical model for the per-packet processing time of these applications on a given processing platform. Interestingly, it is a simple affine function of packet size M, i.e., μ_a^k + ν_a^k M, where μ_a^k and ν_a^k are parameters specific to each benchmark application a on a given processing platform k. Thus, the processing density (in cycles/bit) of a packet of size M from application a at platform k can be modelled as μ_a^k/M + ν_a^k. Therefore, the average processing density w_a^k of application a at platform k can be computed upon arrival of a packet using an exponentially weighted moving average (EWMA) filter:

    w_a^k ← (1 − λ) w_a^k + λ (μ_a^k/M + ν_a^k),   0 < λ < 1.   (1)

One could also directly measure the quantity μ_a^k/M + ν_a^k in Eq. (1) as a whole, instead of relying on the empirical model, by counting the number of CPU cycles actually consumed by a packet while the packet is being processed. Lastly, determining the application type an arriving packet belongs to is an easy task in many commercial routers today since L3/L4 packet classification is a default functionality.

B. Proportional fairness in the dual-resource environment

Fairness and efficiency are two main objectives in resource allocation. The notions of fairness and efficiency have been extensively studied and are well understood with respect to bandwidth sharing. In particular, proportionally fair (PF) rate allocation has been considered a bandwidth sharing strategy that provides a good balance between fairness and efficiency [20], [21].

In our recent work [7], we extended the notion of proportional fairness to the dual-resource environment, where processing and bandwidth resources are jointly constrained.
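Returning to the per-packet processing model of Section II-A, the EWMA update of Eq. (1) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the affine-model parameters μ, ν, the gain λ, and the packet sizes below are made-up values:

```python
# Sketch of the EWMA processing-density estimator of Eq. (1).
# Per-packet cost model: mu + nu*M cycles for a packet of M bits,
# so the per-bit density sample is mu/M + nu (cycles/bit).
# All numeric values are hypothetical, for illustration only.

class DensityEstimator:
    def __init__(self, mu, nu, lam=0.1):
        self.mu = mu      # per-packet fixed cost (cycles), assumed known
        self.nu = nu      # per-bit cost (cycles/bit), assumed known
        self.lam = lam    # EWMA gain lambda, 0 < lambda < 1
        self.w = nu       # running density estimate w (cycles/bit)

    def on_packet(self, size_bits):
        # instantaneous density of this packet: mu/M + nu
        sample = self.mu / size_bits + self.nu
        # Eq. (1): w <- (1 - lambda) * w + lambda * (mu/M + nu)
        self.w = (1 - self.lam) * self.w + self.lam * sample
        return self.w

est = DensityEstimator(mu=4000.0, nu=2.0)   # e.g. an encryption-like app
for bits in (12000, 12000, 4000):           # arriving packet sizes in bits
    w = est.on_packet(bits)                 # w drifts toward the true average
```

As noted above, the `sample` term could instead be the directly measured cycle count divided by the packet size, with no change to the filter itself.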
Fig. 1. Single-CPU and single-link network. [The figure shows flows s ∈ S, each with rate r_s (bits/sec) and processing density w_s (cycles/bit), traversing a CPU queue with capacity C (cycles/sec) followed by a link queue with capacity B (bits/sec).]

In the following, we present this notion and its potential advantages for the dual-resource environment, to define the goal of our main study of this paper on TCP/AQM.

Consider an aggregate log utility maximization problem (P) with dual constraints:

    P:  max_r  Σ_{s∈S} α_s log r_s                        (2)
    subject to  Σ_{s∈S(k)} w_s^k r_s ≤ C_k,  ∀ k ∈ K      (3)
                Σ_{s∈S(l)} r_s ≤ B_l,  ∀ l ∈ L            (4)
                r_s ≥ 0,  ∀ s ∈ S                         (5)

where α_s is the weight (or willingness to pay) of flow s. The solution r* of this problem is unique since it is a strictly concave maximization problem over a convex set [22]. Furthermore, r* is weighted proportionally fair since Σ_{s∈S} α_s (r_s − r_s*)/r_s* ≤ 0 holds for all feasible rate vectors r by the optimality condition of the problem. We define this allocation to be the (dual-resource) PF rate allocation. Note that this allocation can differ from Kelly's PF allocation [20] since the set of feasible rate vectors can differ from that of Kelly's formulation due to the extra processing constraint (3).

From duality theory [22], r* satisfies

    r_s* = α_s / (Σ_{k∈K(s)} w_s^k θ_k* + Σ_{l∈L(s)} π_l*),  ∀ s ∈ S,   (6)

where θ* = [θ_1*, ..., θ_K*]^T and π* = [π_1*, ..., π_L*]^T are the Lagrange multiplier vectors for Eqs. (3) and (4), respectively, and θ_k* and π_l* can be interpreted as the congestion prices of CPU k and link l, respectively. Eq. (6) reveals an interesting property: the PF rate of each flow is inversely proportional to the aggregate congestion price of its route, with the contribution of each θ_k* weighted by w_s^k. The congestion price θ_k* or π_l* is positive only when the corresponding resource becomes a bottleneck, and is zero otherwise.

To illustrate the characteristics of PF rate allocation in the dual-resource environment, let us consider a limited case where there is only one CPU and one link in the network, as shown in Figure 1. For now, we drop k and l from the notation for simplicity. Let w̄_a and w̄_h be the weighted arithmetic and

Fig. 2. Fairness and efficiency in the dual-resource environment (single-CPU and single-link network): (a) and (b) respectively show the normalized bandwidth and CPU allocations enforced by PF rate allocation, and (c) and (d) respectively show the normalized bandwidth and CPU allocations enforced by TCP-like rate allocation. When C/B < w̄_a, TCP-like rate allocation gives lower bandwidth utilization than PF rate allocation (shown in (a) and (c)) and has an unfair allocation of CPU cycles (shown in (d)). [Each panel plots the normalized throughputs (Σ_s r_s/B and r_s/B of four flows) or the normalized CPU usages (Σ_s w_s r_s/C and w_s r_s/C) against C/B, with w̄_h and w̄_a marking the boundaries between the processing-limited, jointly-limited, and bandwidth-limited regions.]

  • Bandwidth(BW)-limited case (θ* = 0 and π* > 0): r_s* = α_s/π*, ∀ s ∈ S, with Σ_{s∈S} w_s r_s* ≤ C and Σ_{s∈S} r_s* = B. From these, we know that this case occurs when C/B ≥ w̄_a, and the PF rate allocation becomes r_s* = α_s B / Σ_{s∈S} α_s, ∀ s ∈ S.
  • Jointly-limited case (θ* > 0 and π* > 0): This case occurs when w̄_h < C/B < w̄_a. By plugging r_s* = α_s/(w_s θ* + π*), ∀ s ∈ S, into Σ_{s∈S} w_s r_s* = C and Σ_{s∈S} r_s* = B, we can obtain θ*, π* and consequently r_s*, ∀ s ∈ S.

We can apply other increasing and concave utility functions (including the one from TCP itself [23]) to the dual-resource problem in Eqs. (2)-(5). The reason we give special attention to proportional fairness, by choosing the log utility function, is that it automatically yields weighted fair CPU sharing (w_s r_s* = α_s C / Σ_{s∈S} α_s, ∀ s ∈ S) if CPU is the limited resource, and weighted fair bandwidth sharing (r_s* = α_s B / Σ_{s∈S} α_s, ∀ s ∈ S) if bandwidth is the limited resource, as illustrated in the example of Figure 1. This property is obviously desirable and is a direct consequence of the particular rate-price relationship given in Eq. (6). It is not achievable with other utility functions.

Figures 2 (a) and (b) illustrate the bandwidth and CPU allocations enforced by PF rate allocation in the single-CPU and single-link case using an example of four flows with
harmonic means of the processing densities of flows sharing
                                                                 ws αs                      identical weights (αs =1, ∀s) and different processing densities
the CPU and link, respectively. So, wa =       ¯           s∈S        αs
                                              −1
                                                                                s∈S         (w1 , w2 , w3 , w4 ) = (1, 2, 4, 8) where wh =2.13 and wa =3.75.
                                                                                                                                      ¯            ¯
and wh =
    ¯         s∈S ws s∈S αs
                                  αs
                                 . There exist three cases as                               For comparison, we also consider a rate allocation in which
below.                                                                                      flows with an identical end-to-end path get an equal share of
                          ∗            ∗           ∗     α                                  the maximally achievable throughput of the path and call it
  • CPU-limited case (θ > 0 and π = 0): rs = w s ∗ ,
                                                         sθ                                 TCP-like rate allocation. That is, if TCP flows run on the
                         ∗                    ∗
    ∀s ∈ S,      s∈S ws rs = C and       s∈S rs ≤ B. From
                                                 C                                          example network in Figure 1 with ordinary AQM schemes
    these, we know that this case occurs when B ≤ wh and
                                                      ¯
                                 ∗       αs C                                               such as RED on both CPU and link queues, they would have
    PF rate allocation becomes rs = ws        αs , ∀s ∈ S.     s∈S
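To make the single-CPU, single-link cases above concrete, the following sketch recomputes the weighted means and the CPU-limited PF allocation for the four-flow example of Figure 2 (α_s = 1, (w_1, ..., w_4) = (1, 2, 4, 8)). The capacities C and B are hypothetical values, chosen only so that C/B ≤ w̄_h, i.e., so that the CPU-limited case applies:

```python
# Sketch: PF rate allocation in a single-CPU, single-link network.
# Flows have weights alpha_s and processing densities w_s (cycles/bit).
# C (cycles/sec) and B (bits/sec) are hypothetical capacities chosen
# so that C/B <= the weighted harmonic mean (CPU-limited case).

alpha = [1.0, 1.0, 1.0, 1.0]          # flow weights (Figure 2 example)
w     = [1.0, 2.0, 4.0, 8.0]          # processing densities w_s

# Weighted arithmetic and harmonic means of the processing densities.
w_a = sum(ws * a for ws, a in zip(w, alpha)) / sum(alpha)
w_h = sum(alpha) / sum(a / ws for ws, a in zip(w, alpha))
print(round(w_a, 2), round(w_h, 2))   # 3.75 2.13, matching the text

C, B = 1e9, 1e9                       # hypothetical; C/B = 1 <= w_h

# CPU-limited PF allocation: r_s = alpha_s * C / (w_s * sum(alpha)).
r = [a * C / (ws * sum(alpha)) for ws, a in zip(w, alpha)]
assert abs(sum(ws * rs for ws, rs in zip(w, r)) - C) < 1e-6  # CPU saturated
assert sum(r) <= B                                           # link is not
# Every flow gets the same CPU share: w_s * r_s = C/4 for all s.
print([ws * rs for ws, rs in zip(w, r)])
```

Note how the equal-CPU-share property (w_s r_s^* identical across flows) falls out of the allocation formula, which is exactly the weighted fair CPU sharing discussed above.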
Thus, in our example, TCP-like rate allocation is defined to be the maximum equal rate vector satisfying the dual constraints, which is r_s = B/S, ∀s, if C/B ≥ w̄_a, and r_s = C/(w̄_a S), ∀s, otherwise. The bandwidth and CPU allocations enforced by TCP-like rate allocation are shown in Figures 2 (c) and (d).
   From Figure 2, we observe that TCP-like rate allocation yields far less aggregate throughput than PF rate allocation when C/B < w̄_a, i.e., in both the CPU-limited and jointly-limited cases. Intuitively, this is because TCP-like allocation, which finds an equal rate allocation, yields unfair sharing of CPU cycles as the CPU becomes a bottleneck (see Figure 2 (d)), which causes the severe aggregate throughput drop. In contrast, PF allocation yields equal sharing of CPU cycles, i.e., w_s r_s becomes equal for all s ∈ S, as the CPU becomes a bottleneck (see Figure 2 (b)), which mitigates the aggregate throughput drop. This problem in TCP-like allocation becomes more severe when the processing densities of the flows have a more skewed distribution.
   In summary, in a single-CPU and single-link network, PF rate allocation achieves equal bandwidth sharing when bandwidth is a bottleneck, equal CPU sharing when the CPU is a bottleneck, and a good balance between equal bandwidth sharing and equal CPU sharing when bandwidth and CPU form a joint bottleneck. Moreover, in comparison to TCP-like rate allocation, such consideration of CPU fairness in PF rate allocation can increase aggregate throughput significantly when the CPU forms a bottleneck, either alone or jointly with bandwidth.

           III. MAIN RESULT: SCALABLE TCP/AQM ALGORITHM

   In this section, we present a scalable AQM scheme, called Dual-Resource Queue (DRQ), that can approximately implement the dual-resource PF rate allocation described in Section II for TCP-Reno flows. DRQ modifies RED [10] to achieve PF allocation without incurring per-flow operations (queueing or state management). DRQ does not require any change in TCP stacks.

A. DRQ objective and optimality

   We describe a TCP/AQM network using the fluid model, as in the literature [23]-[28]. In the fluid model, dynamics whose timescale is shorter than several tens (or hundreds) of round-trip times (RTTs) are neglected. Instead, it is convenient to study the longer-timescale dynamics, and so adequate to model the macroscopic dynamics of the long-lived TCP flows with which we are concerned.
   Let x_s(t) (bits/sec) be the average data rate of TCP source s at time t, where the average is taken over a time interval ∆ (seconds) and ∆ is assumed to be on the order of tens (or hundreds) of RTTs, i.e., large enough to average out the additive-increase, multiplicative-decrease (AIMD) oscillation of TCP. Define the RTT τ_s of source s by τ_s = τ_{si} + τ_{is}, where τ_{si} denotes the forward-path delay from source s to resource i and τ_{is} denotes the backward-path delay from resource i to source s.
   A2: We assume that each TCP flow s has a constant RTT τ_s, as is customary in the fluid modeling of TCP dynamics [23]-[28].
   Let y_l(t) be the average queue length at link l at time t, measured in bits. Then,

  ẏ_l(t) = Σ_{s∈S(l)} x_s(t−τ_{sl}) − B_l           if y_l(t) > 0
           [ Σ_{s∈S(l)} x_s(t−τ_{sl}) − B_l ]⁺      if y_l(t) = 0.     (7)

Similarly, let z_k(t) be the average queue length at CPU k at time t, measured in CPU cycles. Then,

  ż_k(t) = Σ_{s∈S(k)} w_s^k x_s(t−τ_{sk}) − C_k           if z_k(t) > 0
           [ Σ_{s∈S(k)} w_s^k x_s(t−τ_{sk}) − C_k ]⁺      if z_k(t) = 0.     (8)

   Let p_s(t) be the end-to-end marking (or loss) probability at time t to which TCP source s reacts. Then, the rate-adaptation dynamics of TCP Reno or its variants, particularly on the timescale of tens (or hundreds) of RTTs, can be readily described by [23]

  ẋ_s(t) = M_s(1−p_s(t)) / (N_s τ_s²) − (2/3) x_s²(t) p_s(t) / (N_s M_s)           if x_s(t) > 0
           [ M_s(1−p_s(t)) / (N_s τ_s²) − (2/3) x_s²(t) p_s(t) / (N_s M_s) ]⁺      if x_s(t) = 0     (9)

where M_s is the average packet size in bits of TCP flow s and N_s is the number of consecutive data packets that are acknowledged by an ACK packet in TCP flow s (N_s is typically 2).
   In DRQ, we employ one RED queue per resource. Each RED queue computes a probability (we refer to it as the pre-marking probability) in the same way as an ordinary RED queue computes its marking probability.
   That is, the RED queue at link l computes a pre-marking probability ρ_l(t) at time t by

  ρ_l(t) = 0                                           if ŷ_l(t) ≤ b_l
           ( m_l / (b̄_l − b_l) ) (ŷ_l(t) − b_l)        if b_l ≤ ŷ_l(t) ≤ b̄_l
           ( (1 − m_l) / b̄_l ) (ŷ_l(t) − b̄_l) + m_l    if b̄_l ≤ ŷ_l(t) ≤ 2b̄_l
           1                                           if ŷ_l(t) ≥ 2b̄_l     (10)

  dŷ_l(t)/dt = ( log_e(1−λ_l) / η_l ) ŷ_l(t) − ( log_e(1−λ_l) / η_l ) y_l(t)     (11)

where m_l ∈ (0, 1], 0 ≤ b_l < b̄_l, and Eq. (11) is the continuous-time representation of the EWMA filter [25] used by RED, i.e.,

  ŷ_l((k+1)η_l) = (1−λ_l) ŷ_l(kη_l) + λ_l y_l(kη_l),   λ_l ∈ (0, 1).     (12)

Eq. (11) does not model the case where the averaging timescale of the EWMA filter is smaller than the averaging timescale ∆ on which y_l(t) is defined. In this case, Eq. (11) must be replaced by ŷ_l(t) = y_l(t).
   Similarly, the RED queue at CPU k computes a pre-marking probability σ_k(t) at time t by

  σ_k(t) = 0                                           if v̂_k(t) ≤ b_k
           ( m_k / (b̄_k − b_k) ) (v̂_k(t) − b_k)        if b_k ≤ v̂_k(t) ≤ b̄_k
           ( (1 − m_k) / b̄_k ) (v̂_k(t) − b̄_k) + m_k    if b̄_k ≤ v̂_k(t) ≤ 2b̄_k
           1                                           if v̂_k(t) ≥ 2b̄_k     (13)
The EWMA filter for the CPU queue is, analogously to Eq. (11),

  dv̂_k(t)/dt = ( log_e(1−λ_k) / η_k ) v̂_k(t) − ( log_e(1−λ_k) / η_k ) v_k(t)     (14)

where v_k(t) is the translation of z_k(t) into bits.
   Given these pre-marking probabilities, the objective of DRQ is to mark (or discard) packets in such a way that the end-to-end marking (or loss) probability p_s(t) seen by each TCP flow s at time t becomes

  p_s(t) = ( Σ_{k∈K(s)} w_s^k σ_k(t−τ_{ks}) + Σ_{l∈L(s)} ρ_l(t−τ_{ls}) )²
           / ( 1 + ( Σ_{k∈K(s)} w_s^k σ_k(t−τ_{ks}) + Σ_{l∈L(s)} ρ_l(t−τ_{ls}) )² ).     (15)

The actual marking scheme that closely approximates this objective function will be given in Section III-B.
   The Reno/DRQ network model given by Eqs. (7)-(15) is called the average Reno/DRQ network, as the model describes the interaction between DRQ and Reno dynamics in terms of long-term average rates rather than explicitly capturing instantaneous TCP rates in AIMD form. This average network model enables us to study fixed-valued equilibria and consequently to establish, in an average sense, the equilibrium equivalence of a Reno/DRQ network and a network with the same configuration but under dual-resource PF congestion control.
   Let x = [x_1, ..., x_S]^T, σ = [σ_1, ..., σ_K]^T, ρ = [ρ_1, ..., ρ_L]^T, p = [p_1, ..., p_S]^T, y = [y_1, ..., y_L]^T, z = [z_1, ..., z_K]^T, v = [v_1, ..., v_K]^T, ŷ = [ŷ_1, ..., ŷ_L]^T and v̂ = [v̂_1, ..., v̂_K]^T.
   Proposition 1: Consider an average Reno/DRQ network given by Eqs. (7)-(15) and formulate the corresponding aggregate log utility maximization problem (Problem P) as in Eqs. (2)-(5) with α_s = √(3/2) M_s / τ_s. If the Lagrange multiplier vectors, θ^* and π^*, of this corresponding Problem P satisfy the following conditions:

  C1: θ_k^* < 1, ∀k ∈ K(s), ∀s ∈ S,     (16)
  C2: π_l^* < 1, ∀l ∈ L(s), ∀s ∈ S,     (17)

then the average Reno/DRQ network has a unique equilibrium point (x^*, σ^*, ρ^*, p^*, y^*, z^*, v^*, ŷ^*, v̂^*) and (x^*, σ^*, ρ^*) is the primal-dual optimal solution of the corresponding Problem P. In addition, v_k^* > b_k if σ_k^* > 0 and 0 ≤ v_k^* ≤ b_k otherwise, and y_l^* > b_l if ρ_l^* > 0 and 0 ≤ y_l^* ≤ b_l otherwise, for all k ∈ K and l ∈ L.
   Proof: The proof is given in the Appendix.
   Proposition 1 implies that once the Reno/DRQ network reaches its steady state (i.e., equilibrium), the average data rates of Reno sources satisfy weighted proportional fairness with weights α_s = √(3/2) M_s / τ_s. In addition, if a CPU k is a bottleneck (i.e., σ_k^* > 0), its average equilibrium queue length v_k^* stays at a constant value greater than b_k, and if not, it stays at a constant value between 0 and b_k. The same is true for link congestion.
   The existence and uniqueness of such an equilibrium point in the Reno/DRQ network is guaranteed if conditions C1 and C2 hold in the corresponding Problem P. Otherwise, the Reno/DRQ network does not have an equilibrium point.
   In the current Internet environment, however, these conditions will hardly be violated, particularly as the bandwidth-delay products of flows increase. By applying C1 and C2 to the Lagrangian optimality condition of Problem P in Eq. (6) with α_s = √(3/2) M_s / τ_s, we have

  r_s^* τ_s / M_s = √(3/2) / ( Σ_{k∈K(s)} w_s^k θ_k^* + Σ_{l∈L(s)} π_l^* )     (18)
                  > √(3/2) / ( Σ_{k∈K(s)} w_s^k + |L(s)| )     (19)

where r_s^* τ_s / M_s is the bandwidth-delay product (or window size) of flow s, measured in packets. The maximum packet size in the Internet is M_s = 1,536 bytes (i.e., the maximum Ethernet packet size). Flows that have the minimum processing density are IP forwarding applications with maximum packet size [17]. For instance, a measurement study in [15] showed that the per-packet processing time required for a NetBSD radix-tree routing table lookup on a 167 MHz Pentium processor is 51 µs (for a faster CPU the processing time shrinks, but since what matters is the number of cycles per bit, this estimate carries over to other CPUs). Thus, the processing density for this application flow is about w_s^k = 51 (µs) × 167 (MHz) / (1,536 × 8) (bits) ≈ 0.69 (cycles/bit). Therefore, from Eq. (19), the worst-case lower bound on the window size becomes r_s^* τ_s / M_s > 1.77 (packets), which occurs when the flow traverses only a CPU in its path (i.e., |K(s)| = 1 and |L(s)| = 0). This concludes that conditions C1 and C2 will never be violated as long as the steady-state average TCP window size is sustainable at a value greater than or equal to 2 packets, even in the worst case.

B. DRQ implementation

   In this section, we present a simple, scalable packet marking (or discarding) scheme that closely approximates the DRQ objective function laid out in Eq. (15).
   A3: We assume that for all times

  ( Σ_{k∈K(s)} w_s^k σ_k(t−τ_{ks}) + Σ_{l∈L(s)} ρ_l(t−τ_{ls}) )² ≪ 1,   ∀s ∈ S.     (20)

This assumption implies that (w_s^k σ_k(t))² ≪ 1, ∀k ∈ K, ρ_l(t)² ≪ 1, ∀l ∈ L, and that any product of w_s^k σ_k(t) and ρ_l(t) is also much smaller than 1. Note that our analysis is based on long-term average values of σ_k(t) and ρ_l(t). The typical operating points of TCP in the Internet during steady state, where TCP shows reasonable performance, are under low end-to-end loss probabilities (less than 1%) [29]. Since the end-to-end average probabilities are low, the marking probabilities at individual links and CPUs can be much lower.
   Let R be the set of all the resources (including CPUs and links) in the network. Also, for each flow s, let R(s) = {1, ..., |R(s)|} ⊂ R be the set of all the resources that it traverses along its path, and let i ∈ R(s) denote the i-th resource along its path and indicate whether it is a CPU or a link.
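Assumption A3 is what licenses replacing the DRQ objective of Eq. (15) with its numerator in Section III-B: writing q for the bracketed pre-marking sum, p_s = q²/(1+q²) differs from q² only by the factor 1/(1+q²). A quick numerical check; the value q = 0.1 is hypothetical, at the upper end of the low-loss regime cited above:

```python
# A3 in action: for a small aggregate pre-marking sum q, the DRQ
# objective p = q^2 / (1 + q^2) of Eq. (15) is well approximated by q^2.
# q below is a hypothetical value in the low-loss regime.

q = 0.1                        # aggregate pre-marking sum for some flow s
p_exact = q**2 / (1 + q**2)    # Eq. (15)
p_approx = q**2                # the Section III-B approximation

rel_err = (p_approx - p_exact) / p_exact
print(p_exact, p_approx, rel_err)   # relative error is exactly q^2 = 1%
```

Since the relative error equals q² itself, the approximation degrades gracefully: halving the pre-marking probabilities quarters the error.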
Then, some manipulation after applying Assumption A3 to Eq. (15) gives

  p_s(t) ≈ ( Σ_{k∈K(s)} w_s^k σ_k(t−τ_{ks}) + Σ_{l∈L(s)} ρ_l(t−τ_{ls}) )²     (21)
         = Σ_{i=1}^{|R(s)|} δ_i(t−τ_{is}) + Σ_{i=2}^{|R(s)|} Σ_{i'=1}^{i−1} ε_{i'}(t−τ_{i's}) ε_i(t−τ_{is})

where

  δ_i(t) = (w_s^i σ_i(t))²    if i indicates a CPU
           ρ_i(t)²            if i indicates a link     (22)

  ε_i(t) = √2 w_s^i σ_i(t)    if i indicates a CPU
           √2 ρ_i(t)          if i indicates a link.     (23)

  When a packet arrives at resource i at time t:
      if (ECN != 11)
          set ECN to 11 with probability δ_i(t);
      if (ECN == 00)
          set ECN to 10 with probability ε_i(t);
      else if (ECN == 10)
          set ECN to 11 with probability ε_i(t);

  Fig. 3. DRQ's ECN marking algorithm

   Eq. (21) tells us that each resource i ∈ R(s) (except the first resource in R(s), i.e., i = 1) contributes to p_s(t) with two quantities, δ_i(t−τ_{is}) and Σ_{i'=1}^{i−1} ε_{i'}(t−τ_{i's}) ε_i(t−τ_{is}). Moreover, resource i can compute the former using its own
i in R(s). Then, the proposed ECN marking scheme (Fig. 3) can be expressed by the following recursion. For i = 1, 2, ..., |R(s)|,

  P_{11}^i = P_{11}^{i−1} + (1 − P_{11}^{i−1}) δ_i + P_{10}^{i−1} (1 − δ_i) ε_i     (24)
           = 1 − (1 − δ_i)(1 − P_{11}^{i−1} − P_{10}^{i−1} ε_i),     (25)
  P_{10}^i = P_{10}^{i−1} (1 − δ_i)(1 − ε_i) + P_{00}^{i−1} (1 − δ_i) ε_i,     (26)
  P_{00}^i = P_{00}^{i−1} (1 − δ_i)(1 − ε_i)     (27)

with the initial condition P_{00}^0 = 1, P_{10}^0 = 0, P_{11}^0 = 0.
   Evolving i from 0 to |R(s)|, we obtain

  P_{11}^{|R(s)|} = 1 − ( Π_{i=1}^{|R(s)|} (1 − δ_i) ) ( 1 − Σ_{i=2}^{|R(s)|} Σ_{i'=1}^{i−1} ε_{i'} ε_i ) + Θ     (28)

where Θ collects the higher-order terms (order ≥ 3) of the ε_i's. By Assumption A3, we have

  P_{11}^{|R(s)|} ≈ 1 − ( Π_{i=1}^{|R(s)|} (1 − δ_i) ) ( 1 − Σ_{i=2}^{|R(s)|} Σ_{i'=1}^{i−1} ε_{i'} ε_i )     (29)
                  ≈ Σ_{i=1}^{|R(s)|} δ_i + Σ_{i=2}^{|R(s)|} Σ_{i'=1}^{i−1} ε_{i'} ε_i     (30)

which shows that the proposed ECN marking scheme approximately implements the DRQ objective function in Eq. (21), since P_{11}^{|R(s)|} = p_s.
   Disclaimer: DRQ requires alternative semantics for the ECN field in the IP header, which differ from the default semantics defined in RFC 3168 [31]. What we have shown here is that DRQ can be implemented using two-bit signaling such as ECN. The coexistence of the default semantics and the alternative semantics required by DRQ needs further study.

C. DRQ stability

   In this section, we explore the stability of Reno/DRQ networks. Unfortunately, analyzing its global stability is an ex-
congestion information, i.e., σi (t) if it is a CPU or ρi (t)                              tremely difficult task since the dynamics involved are nonlinear
if it is a link, whereas it cannot compute the latter without                              and retarded. Here, we present a partial result concerning local
knowing the congestion information of its upstream resources                               stability, i.e., stability around the equilibrium point.
on its path (∀ l < l). That is, the latter requires an inter-                                 Define |R|x|S| matrix Γ(z) whose (i, s) element is given
resource signaling to exchange the congestion information.                                 by
For this reason, we refer to δi (t) as intra-resource marking                                             i −zτ
                                                                                                          ws e is if s ∈ S(i) and i indicates CPU
probability of resource i at time t and i (t) as inter-resource                              Γis (z) =       e−zτis       if s ∈ S(i) and i indicates link
marking probability of resource i at time t. We solve this intra-                                        
                                                                                                             0                       otherwise.
and inter-resource marking problem using two-bit ECN flags                                                                                                 (31)
without explicit communication between resources.                                             Proposition 2: An average Reno/DRQ network is locally
   Consider the two-bit ECN field in the IP header [30].                                    stable if we choose the RED parameters in DRQ such
Among the four possible values of ECN bits, we use three val-                              that max{ b mk , 1−mk }Ck ∈ (0, ψ), ∀k ∈ K, and
                                                                                                            −b      b
                                                                                                          k                k
ues to indicate three cases: initial state (ECN=00), signaling-                                                   k
                                                                                           max{ b ml , 1−ml }Bl ∈ (0, ψ), ∀l ∈ L, and
marked (ECN=10) and congestion-marked (ECN=11). When                                              −b
                                                                                                  l   l b     l

a packet is congestion-marked (ECN=11), the packet is either                                                                      3/2φ2 [Γ(0)]
                                                                                                                                      min
marked (if TCP supports ECN) or discarded (if not). DRQ sets                                                          ψ≤                                                      (32)
                                                                                                                               |R|Λ2 τmax wmax
                                                                                                                                   max
                                                                                                                                       3
the ECN bits as shown in Figure 3.
   Below, we verify that the ECN marking scheme in Figure 3
                                                                                                                        max{Ck }
                                                                                           where Λmax = max{ min{wk } min{Ms } , min{Ms} }, τmax =
                                                                                                                                      max{Bl
                                                                                                                                             }
                                                                                                                          s
approximately implements the objective function in Eq. (21).                                                            k
                                                                                           max{τs }, wmax = max{ws , 1} and φmin [Γ(0)] denotes the
Consider a flow s with path R(s). For now, we drop the time                                 smallest singular values of the matrix Γ(z) evaluated at z = 0.
                                         i   i     i
index t to simplify the notation. Let P00 , P10 , P11 respectively                               Proof: The proof is given in Appendix and it is a
denote the probabilities that packets of flow s will have                                   straightforward application of the TCP/RED stability result in
ECN=00, ECN=10, ECN=11, upon departure from resource                                       [32].
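To make the two-bit mechanism concrete, the following sketch propagates the distribution of ECN states for one flow's packets hop by hop and checks that the end-to-end congestion-marking probability tracks the first-order expansion of Eq. (30), i.e., the DRQ target $p_s$ of Eq. (21). The per-hop transition rule used here (congestion-mark any unmarked packet with probability $\delta_i$; otherwise, with probability $\epsilon_i$, upgrade ECN=10 to ECN=11 or set ECN=00 to ECN=10) is our reading of the scheme, since Figure 3 itself is not reproduced in this excerpt; the numeric prices are made up for illustration.

```python
def drq_ecn_distribution(deltas, epsilons):
    """Propagate P(ECN=00), P(ECN=10), P(ECN=11) for one flow's packets
    across the resources on its path, under an assumed per-hop rule:
    with prob. delta_i an unmarked packet is congestion-marked (11);
    otherwise, with prob. eps_i a signaling-marked packet (10) is
    upgraded to 11, and a fresh packet (00) is signaling-marked (10)."""
    p00, p10, p11 = 1.0, 0.0, 0.0
    for d, e in zip(deltas, epsilons):
        # all three updates use the pre-hop values of p00, p10, p11
        p11 = p11 + d * (p00 + p10) + (1 - d) * e * p10
        p10 = (1 - d) * ((1 - e) * p10 + e * p00)
        p00 = (1 - d) * (1 - e) * p00
    return p00, p10, p11

# Three resources with per-resource "prices" a_i (= w_s*sigma_i or rho_i),
# so delta_i = a_i**2 and eps_i = sqrt(2)*a_i as in Eqs. (22)-(23).
a = [0.05, 0.08, 0.06]
deltas = [x * x for x in a]
epsilons = [(2 ** 0.5) * x for x in a]

p00, p10, p11 = drq_ecn_distribution(deltas, epsilons)
target = sum(a) ** 2   # the DRQ objective p_s of Eq. (21)
# p11 matches target up to higher-order terms in the eps_i's
```

In the small-price regime of Assumption A3 the discrepancy between `p11` and `target` is third order in the prices, which is what the derivation above exploits.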
IV. PERFORMANCE

A. Simulation setup

In this section, we use simulation to verify the performance of DRQ in the dual-resource environment with TCP Reno sources. We compare the performance of DRQ with that of the two other AQM schemes that we discussed in the introduction. One scheme is the simplest approach, where both CPU and link queues use RED; the other uses DRR (a variant of WFQ) to schedule CPU usage among competing flows according to the processing density of each flow. DRR maintains per-flow queues and equalizes the CPU usage in a round-robin fashion when the processing demand is higher than the CPU capacity (i.e., CPU-limited). In some sense, these choices of AQM are two extremes: one is simple but less fair in its use of CPU, as RED is oblivious to the differing CPU demands of flows; the other is complex but fair in its use of CPU, as DRR installs equal shares of CPU among these flows. Our goal is to demonstrate through simulation that DRQ using two FIFO queues always offers provable fairness and efficiency, defined as the dual-resource PF allocation. Note that all three schemes use RED for link queues, but DRQ uses its own marking algorithm for link queues as shown in Figure 3, which uses the marking probability obtained from the underlying RED queue. We call the scheme with DRR for CPU queues and RED for link queues DRR-RED, and the scheme with RED for CPU queues and RED for link queues RED-RED.

[Fig. 4. Single link scenario in dumbbell topology: four source groups (w = 0.25, 0.50, 1.00, 2.00), each with 10 TCP sources on 5 ms access links, share the 40 Mbps, 10 ms bottleneck link L1 between routers R1 and R2 toward the TCP sinks.]
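As an illustration of the DRR baseline described above (not code from the paper), the sketch below schedules CPU cycles across per-flow queues with a deficit counter, charging each packet its processing cost (packet bits times processing density w). Backlogged flows then consume equal CPU shares regardless of w. The flow names, quantum, and packet sizes are invented for the example.

```python
from collections import deque

def drr_cpu_schedule(flows, quantum, total_cycles):
    """Deficit round robin over per-flow packet queues, charging each
    packet its CPU cost = packet bits * processing density w.
    `flows` maps a name to (queue of packet sizes in bits, w).
    Returns CPU cycles consumed per flow."""
    deficit = {name: 0.0 for name in flows}
    used = {name: 0.0 for name in flows}
    spent = 0.0
    while spent < total_cycles and any(q for q, _ in flows.values()):
        for name, (queue, w) in flows.items():
            if not queue:
                continue
            deficit[name] += quantum          # grant this round's CPU quantum
            while queue and queue[0] * w <= deficit[name]:
                cost = queue.popleft() * w    # cycles to process this packet
                deficit[name] -= cost
                used[name] += cost
                spent += cost
                if spent >= total_cycles:
                    return used
    return used

# Two backlogged flows with 4000-bit packets: w = 1.0 vs. w = 10.0 cycles/bit.
flows = {"low": (deque([4000] * 1000), 1.0),
         "high": (deque([4000] * 100), 10.0)}
used = drr_cpu_schedule(flows, quantum=40_000, total_cycles=400_000)
# both flows end up with equal CPU usage despite a 10x difference in w
```

The quantum is denominated in CPU cycles rather than bytes, which is the essential change relative to bandwidth DRR: the high-density flow sends fewer packets per round, so cycles, not packets, are equalized.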
The simulation is performed in the NS-2 [33] environment. We modified NS-2 to emulate the CPU capacity by simply holding a packet for the duration of its processing time. In the simulation, TCP-NewReno sources are used at end hosts, and RED queues are implemented using the default setting for the "gentle" RED mode [34] ($m_i = 0.1$, $\underline{b}_i = 50$ pkts, $\bar{b}_i = 550$ pkts and $\lambda_i = 10^{-4}$; the packet size is fixed at 500 bytes). The same RED setting is used for the link queues of DRR-RED and RED-RED, and also for both the CPU and link queues of DRQ (DRQ uses a function of the marking probabilities to mark or drop packets for both queues). In our analytical model, we separate CPU and link. To simplify the simulation setup and its description, when we refer to a "link" in the simulation setup, we assume that each link l consists of one CPU and one Tx link (i.e., bandwidth).

By adjusting the CPU capacity $C_l$, the link bandwidth $B_l$, and the amount of background traffic, we can control the bottleneck conditions. Our simulation topologies are chosen from a varied set of Internet topologies, from simple dumbbell topologies to more complex WAN topologies. Below we discuss these setups and simulation scenarios in detail, together with their corresponding results for the three schemes discussed above.

B. Dumbbell with long-lived TCP flows

To confirm our analysis in Section II-B, we run a single-link bottleneck case. Figure 4 shows an instance of the dumbbell topology commonly used in congestion control research. We fix the bandwidth of the bottleneck link to 40 Mbps and vary its CPU capacity from 5 Mcycles/s to 55 Mcycles/s. This variation allows the bottleneck to move from the CPU-limited region to the BW-limited region. Four classes of long-lived TCP flows are added to the simulation, with processing densities of 0.25, 0.5, 1.0 and 2.0, respectively. We simulate ten TCP Reno flows for each class. All the flows have the same RTT of 40 ms.

[Fig. 5. Average throughput of four different classes of long-lived TCP flows in the dumbbell topology: panels (a) RED-RED, (b) DRR-RED, (c) DRQ, each plotting the average throughput (Mbps) of classes SG1-SG4 (w = 0.25, 0.50, 1.00, 2.00) against CPU capacity (Mcycles/sec) across the processing-limited, jointly-limited and bandwidth-limited regions. Each class has a different CPU demand per bit (w). No other background traffic is added. The dotted lines indicate the ideal PF rate allocation for each class. In the figure, we find that DRQ and DRR-RED show good fairness in the CPU-limited region while RED-RED does not. Vertical bars indicate 95% confidence intervals.]

In presenting our results, we take the average throughput of the TCP flows that belong to the same class. Figure 5 plots the average throughput of each class. To see whether DRQ achieves PF allocation, we also plot the ideal proportionally fair rate for each class (shown as a dotted line). As shown in Figure 5(a), where we use typical RED schemes at both queues, all TCP flows achieve the same throughput regardless
[Fig. 6. Comparison of bandwidth utilization in the dumbbell single-bottleneck topology: bandwidth utilization versus CPU capacity (Mcycles/sec) for DRQ, DRR-RED and RED-RED. RED-RED achieves far less bandwidth utilization than DRR-RED and DRQ when the CPU becomes a bottleneck.]

[Fig. 7. Impact of high processing flows: (a) normalized CPU share of high processing flows and (b) total throughput (Mbps), each versus the number of high processing flows, for DRQ, DRR-RED and RED-RED. As the number of high processing flows increases, the network becomes more CPU-bound. Under RED-RED, these flows can dominate the use of CPU, reaching about 80% CPU usage with only 10 flows, starving the 40 competing, but low processing, flows.]

of the CPU capacity of the link and their processing densities. Figures 5(b) and (c) show that the average throughput curves of DRR-RED and DRQ follow the ideal PF rates reasonably well. When the CPU is the only bottleneck resource, the PF rate of each flow must be inversely proportional to its processing density $w_s$, in order to share the CPU equally. In the BW-limited region, the proportionally fair rate of each flow is identical to the equal share of the bandwidth. In the jointly-limited region, flows maintain the PF rates while fully utilizing both resources. Although DRQ does not employ the per-flow queue structure of DRR, its performance is comparable to that of DRR-RED.

Figure 6 shows the aggregate throughput achieved by each scheme. It shows that RED-RED has much lower bandwidth utilization than the other two schemes. This is because, as discussed in Section II-B, when the CPU is a bottleneck resource, the equilibrium operating points of TCP flows over the RED CPU queue, which achieve equal bandwidth usage while keeping the total CPU usage below the CPU capacity, are much lower than those of the other schemes, which must ensure equal sharing of the CPU (not the bandwidth) in the CPU-limited region.

C. Impact of flows with high processing demands

In the introduction, we indicated that RED-RED can cause extreme unfairness in the use of resources. To show this by experiment, we construct a simulation run where we fix the CPU capacity to 40 Mcycles/s and add an increasing number of flows with a high CPU demand ($w_s = 10$) in the same setup as the dumbbell single-bottleneck environment of Section IV-B. We call these flows high processing flows. From Figure 5, at 40 Mcycles/s, when no high processing flows are added, the CPU is not a bottleneck. But as the number of high processing flows increases, the network moves into the CPU-limited region. Figure 7 shows the results of this simulation run. In Figure 7(a), as we increase the number of high processing flows, the aggregate CPU share of the high processing flows deviates significantly from the equal CPU share; under a larger number of high processing flows (e.g., 10 flows), these flows dominate the CPU usage over the other, lower processing density flows, driving them to starvation. In contrast, DRQ and DRR approximately implement the equal CPU sharing policy. Even though the number of high processing flows increases, the bandwidth remains a bottleneck resource as before, so the link is in the jointly-limited region, which is why the CPU share of the high processing flows goes beyond 20%.

D. Dumbbell with background Internet traffic

No Internet links are without cross traffic. In order to emulate more realistic Internet environments, we add cross traffic modelled from various observations on RTT distribution [35], flow sizes [36] and flow arrivals [37]. As modelling Internet traffic is in itself a topic of research, we do not dwell on which model is more realistic. In this paper, we present one model that contains the statistical characteristics that are commonly assumed or confirmed by researchers. These characteristics include that the distribution of flow sizes has long-range dependency [36], [38], that the RTTs of flows are roughly exponentially distributed [39], and that the inter-arrival times of flows are exponentially distributed [37]. Following these characteristics, our cross traffic consists of a number of short- and medium-lived TCP flows that follow a Poisson arrival process and send a random number of packets drawn from a hybrid distribution of Lognormal (body) and Pareto (tail) distributions with a cutoff of 133 KB (55% of packets are from flows larger than the cutoff size). We set the parameters of the flow sizes identical to those from the Internet traffic characteristics in [36] (Lognormal: $\mu = 9.357$, $\sigma = 1.318$; Pareto: $\alpha = 1.1$), so that a continuous distribution of flow sizes is included in the background traffic. Furthermore, we also generate reverse-path traffic consisting of 10 long-lived TCP flows and a number of short- and medium-lived flows to increase realism and also to reduce the phase effect. The RTT of each cross-traffic flow is randomly selected from a range of 20 to 60 ms. We fix the bottleneck bandwidth and CPU capacity to 40 Mbps and 40 Mcycles/s, respectively, and generate the same number
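The hybrid Lognormal-body/Pareto-tail flow-size model described above can be sampled as in the sketch below. The splicing rule used (redrawing any body sample that exceeds the cutoff from a Pareto anchored at the cutoff) is one common construction and an assumption on our part; only the parameter values ($\mu$, $\sigma$, $\alpha$, 133 KB cutoff) come from the text.

```python
import random

def sample_flow_size(rng, cutoff=133_000, mu=9.357, sigma=1.318, alpha=1.1):
    """Draw one flow size (bytes): Lognormal(mu, sigma) body spliced with a
    Pareto(alpha) tail anchored at `cutoff`. Parameters are those quoted
    from [36]; the splicing rule itself is an assumed construction."""
    size = rng.lognormvariate(mu, sigma)
    if size > cutoff:
        # heavy tail: resample sizes above the cutoff from a Pareto draw
        size = cutoff * rng.paretovariate(alpha)
    return size

rng = random.Random(7)
sizes = sorted(sample_flow_size(rng) for _ in range(10_000))
median = sizes[len(sizes) // 2]   # near the Lognormal median exp(mu), ~11.6 KB
```

With $\alpha = 1.1$ the tail is heavy enough that a small fraction of very large flows carries a disproportionate share of the packets, matching the long-range-dependence characteristic cited above.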
Analysis of Link State Resource Reservation Protocol for Congestion Managemen...
 
Analysis of Rate Based Congestion Control Algorithms in Wireless Technologies
Analysis of Rate Based Congestion Control Algorithms in Wireless TechnologiesAnalysis of Rate Based Congestion Control Algorithms in Wireless Technologies
Analysis of Rate Based Congestion Control Algorithms in Wireless Technologies
 
Q-LEARNING BASED ROUTING PROTOCOL TO ENHANCE NETWORK LIFETIME IN WSNS
Q-LEARNING BASED ROUTING PROTOCOL TO ENHANCE NETWORK LIFETIME IN WSNSQ-LEARNING BASED ROUTING PROTOCOL TO ENHANCE NETWORK LIFETIME IN WSNS
Q-LEARNING BASED ROUTING PROTOCOL TO ENHANCE NETWORK LIFETIME IN WSNS
 
Improving Performance of TCP in Wireless Environment using TCP-P
Improving Performance of TCP in Wireless Environment using TCP-PImproving Performance of TCP in Wireless Environment using TCP-P
Improving Performance of TCP in Wireless Environment using TCP-P
 
Implementing True Zero Cycle Branching in Scalar and Superscalar Pipelined Pr...
Implementing True Zero Cycle Branching in Scalar and Superscalar Pipelined Pr...Implementing True Zero Cycle Branching in Scalar and Superscalar Pipelined Pr...
Implementing True Zero Cycle Branching in Scalar and Superscalar Pipelined Pr...
 
Differentiated Classes of Service and Flow Management using An Hybrid Broker1
Differentiated Classes of Service and Flow Management using An Hybrid Broker1Differentiated Classes of Service and Flow Management using An Hybrid Broker1
Differentiated Classes of Service and Flow Management using An Hybrid Broker1
 
87
8787
87
 
PROPOSED A HETEROGENEOUS CLUSTERING ALGORITHM TO IMPROVE QOS IN WSN
PROPOSED A HETEROGENEOUS CLUSTERING ALGORITHM TO IMPROVE QOS IN WSNPROPOSED A HETEROGENEOUS CLUSTERING ALGORITHM TO IMPROVE QOS IN WSN
PROPOSED A HETEROGENEOUS CLUSTERING ALGORITHM TO IMPROVE QOS IN WSN
 
94
9494
94
 
Traffic-aware adaptive server load balancing for softwaredefined networks
Traffic-aware adaptive server load balancing for softwaredefined networks Traffic-aware adaptive server load balancing for softwaredefined networks
Traffic-aware adaptive server load balancing for softwaredefined networks
 
ORCHESTRATING BULK DATA TRANSFERS ACROSS GEO-DISTRIBUTED DATACENTERS
ORCHESTRATING BULK DATA TRANSFERS ACROSS GEO-DISTRIBUTED DATACENTERSORCHESTRATING BULK DATA TRANSFERS ACROSS GEO-DISTRIBUTED DATACENTERS
ORCHESTRATING BULK DATA TRANSFERS ACROSS GEO-DISTRIBUTED DATACENTERS
 
Towards Seamless TCP Congestion Avoidance in Multiprotocol Environments
Towards Seamless TCP Congestion Avoidance in Multiprotocol EnvironmentsTowards Seamless TCP Congestion Avoidance in Multiprotocol Environments
Towards Seamless TCP Congestion Avoidance in Multiprotocol Environments
 
AN EFFECTIVE CONTROL OF HELLO PROCESS FOR ROUTING PROTOCOL IN MANETS
AN EFFECTIVE CONTROL OF HELLO PROCESS FOR ROUTING PROTOCOL IN MANETSAN EFFECTIVE CONTROL OF HELLO PROCESS FOR ROUTING PROTOCOL IN MANETS
AN EFFECTIVE CONTROL OF HELLO PROCESS FOR ROUTING PROTOCOL IN MANETS
 

En vedette

Adaptation of Reputation Management Systems to Dynamic Network Conditions in ...
Adaptation of Reputation Management Systems to Dynamic Network Conditions in ...Adaptation of Reputation Management Systems to Dynamic Network Conditions in ...
Adaptation of Reputation Management Systems to Dynamic Network Conditions in ...ambitlick
 
Accurate and Energy-Efficient Range-Free Localization for Mobile Sensor Networks
Accurate and Energy-Efficient Range-Free Localization for Mobile Sensor NetworksAccurate and Energy-Efficient Range-Free Localization for Mobile Sensor Networks
Accurate and Energy-Efficient Range-Free Localization for Mobile Sensor Networksambitlick
 
Adaptive location oriented content delivery in
Adaptive location oriented content delivery inAdaptive location oriented content delivery in
Adaptive location oriented content delivery inambitlick
 
Exploring the dynamic nature of mobile nodes for predicting route lifetime in...
Exploring the dynamic nature of mobile nodes for predicting route lifetime in...Exploring the dynamic nature of mobile nodes for predicting route lifetime in...
Exploring the dynamic nature of mobile nodes for predicting route lifetime in...ambitlick
 
Parallelizing itinerary based knn query
Parallelizing itinerary based knn queryParallelizing itinerary based knn query
Parallelizing itinerary based knn queryambitlick
 
Delay bounds of chunk based peer-to-peer
Delay bounds of chunk based peer-to-peerDelay bounds of chunk based peer-to-peer
Delay bounds of chunk based peer-to-peerambitlick
 
Record matching over query results
Record matching over query resultsRecord matching over query results
Record matching over query resultsambitlick
 
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...ambitlick
 
Node clustering in wireless sensor
Node clustering in wireless sensorNode clustering in wireless sensor
Node clustering in wireless sensorambitlick
 
DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency...
DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency...DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency...
DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency...ambitlick
 
Communication cost minimization in wireless sensor and actor networks for roa...
Communication cost minimization in wireless sensor and actor networks for roa...Communication cost minimization in wireless sensor and actor networks for roa...
Communication cost minimization in wireless sensor and actor networks for roa...ambitlick
 
Efficient load aware routing scheme
Efficient load aware routing schemeEfficient load aware routing scheme
Efficient load aware routing schemeambitlick
 
Créer des contenus pour le web
Créer des contenus pour le webCréer des contenus pour le web
Créer des contenus pour le webVirginie Colombel
 
Brand content sur Doctissimo
Brand content sur DoctissimoBrand content sur Doctissimo
Brand content sur DoctissimoAhuDocti
 

En vedette (17)

Multimedia
MultimediaMultimedia
Multimedia
 
Adaptation of Reputation Management Systems to Dynamic Network Conditions in ...
Adaptation of Reputation Management Systems to Dynamic Network Conditions in ...Adaptation of Reputation Management Systems to Dynamic Network Conditions in ...
Adaptation of Reputation Management Systems to Dynamic Network Conditions in ...
 
Accurate and Energy-Efficient Range-Free Localization for Mobile Sensor Networks
Accurate and Energy-Efficient Range-Free Localization for Mobile Sensor NetworksAccurate and Energy-Efficient Range-Free Localization for Mobile Sensor Networks
Accurate and Energy-Efficient Range-Free Localization for Mobile Sensor Networks
 
Adaptive location oriented content delivery in
Adaptive location oriented content delivery inAdaptive location oriented content delivery in
Adaptive location oriented content delivery in
 
Exploring the dynamic nature of mobile nodes for predicting route lifetime in...
Exploring the dynamic nature of mobile nodes for predicting route lifetime in...Exploring the dynamic nature of mobile nodes for predicting route lifetime in...
Exploring the dynamic nature of mobile nodes for predicting route lifetime in...
 
Parallelizing itinerary based knn query
Parallelizing itinerary based knn queryParallelizing itinerary based knn query
Parallelizing itinerary based knn query
 
Delay bounds of chunk based peer-to-peer
Delay bounds of chunk based peer-to-peerDelay bounds of chunk based peer-to-peer
Delay bounds of chunk based peer-to-peer
 
Record matching over query results
Record matching over query resultsRecord matching over query results
Record matching over query results
 
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...
 
Ns2leach
Ns2leachNs2leach
Ns2leach
 
Node clustering in wireless sensor
Node clustering in wireless sensorNode clustering in wireless sensor
Node clustering in wireless sensor
 
DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency...
DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency...DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency...
DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency...
 
Communication cost minimization in wireless sensor and actor networks for roa...
Communication cost minimization in wireless sensor and actor networks for roa...Communication cost minimization in wireless sensor and actor networks for roa...
Communication cost minimization in wireless sensor and actor networks for roa...
 
Efficient load aware routing scheme
Efficient load aware routing schemeEfficient load aware routing scheme
Efficient load aware routing scheme
 
initiation a l'economie 2.0
initiation a l'economie 2.0initiation a l'economie 2.0
initiation a l'economie 2.0
 
Créer des contenus pour le web
Créer des contenus pour le webCréer des contenus pour le web
Créer des contenus pour le web
 
Brand content sur Doctissimo
Brand content sur DoctissimoBrand content sur Doctissimo
Brand content sur Doctissimo
 

Similaire à Dual-resource TCPAQM for Processing-constrained Networks

ANALYSIS OF LINK STATE RESOURCE RESERVATION PROTOCOL FOR CONGESTION MANAGEMEN...
ANALYSIS OF LINK STATE RESOURCE RESERVATION PROTOCOL FOR CONGESTION MANAGEMEN...ANALYSIS OF LINK STATE RESOURCE RESERVATION PROTOCOL FOR CONGESTION MANAGEMEN...
ANALYSIS OF LINK STATE RESOURCE RESERVATION PROTOCOL FOR CONGESTION MANAGEMEN...ijgca
 
Project DRAC: Creating an applications-aware network
Project DRAC: Creating an applications-aware networkProject DRAC: Creating an applications-aware network
Project DRAC: Creating an applications-aware networkTal Lavian Ph.D.
 
OPTIMIZED ROUTING AND DENIAL OF SERVICE FOR ROBUST TRANSMISSION IN WIRELESS N...
OPTIMIZED ROUTING AND DENIAL OF SERVICE FOR ROBUST TRANSMISSION IN WIRELESS N...OPTIMIZED ROUTING AND DENIAL OF SERVICE FOR ROBUST TRANSMISSION IN WIRELESS N...
OPTIMIZED ROUTING AND DENIAL OF SERVICE FOR ROBUST TRANSMISSION IN WIRELESS N...IRJET Journal
 
Comparative Analysis of Green Algorithm within Active Queue Management for Mo...
Comparative Analysis of Green Algorithm within Active Queue Management for Mo...Comparative Analysis of Green Algorithm within Active Queue Management for Mo...
Comparative Analysis of Green Algorithm within Active Queue Management for Mo...ijtsrd
 
Adaptive Routing in Wireless Sensor Networks: QoS Optimisation for Enhanced A...
Adaptive Routing in Wireless Sensor Networks: QoS Optimisation for Enhanced A...Adaptive Routing in Wireless Sensor Networks: QoS Optimisation for Enhanced A...
Adaptive Routing in Wireless Sensor Networks: QoS Optimisation for Enhanced A...M H
 
HYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRES
HYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRESHYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRES
HYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRESijcsit
 
DWDM-RAM: Enabling Grid Services with Dynamic Optical Networks
DWDM-RAM: Enabling Grid Services with Dynamic Optical NetworksDWDM-RAM: Enabling Grid Services with Dynamic Optical Networks
DWDM-RAM: Enabling Grid Services with Dynamic Optical NetworksTal Lavian Ph.D.
 
CPCRT: Crosslayered and Power Conserved Routing Topology for congestion Cont...
CPCRT: Crosslayered and Power Conserved Routing Topology  for congestion Cont...CPCRT: Crosslayered and Power Conserved Routing Topology  for congestion Cont...
CPCRT: Crosslayered and Power Conserved Routing Topology for congestion Cont...IOSR Journals
 
A novel routing technique for mobile ad hoc networks (manet)
A novel routing technique for mobile ad hoc networks (manet)A novel routing technique for mobile ad hoc networks (manet)
A novel routing technique for mobile ad hoc networks (manet)ijngnjournal
 
A Bandwidth Efficient Scheduling Framework for Non Real Time Applications in ...
A Bandwidth Efficient Scheduling Framework for Non Real Time Applications in ...A Bandwidth Efficient Scheduling Framework for Non Real Time Applications in ...
A Bandwidth Efficient Scheduling Framework for Non Real Time Applications in ...ijdpsjournal
 
Cross Layer- Performance Enhancement Architecture (CL-PEA) for MANET
Cross Layer- Performance Enhancement Architecture (CL-PEA) for MANETCross Layer- Performance Enhancement Architecture (CL-PEA) for MANET
Cross Layer- Performance Enhancement Architecture (CL-PEA) for MANETijcncs
 
Defeating jamming with the power of silence a gametheoretic analysis
Defeating jamming with the power of silence a gametheoretic analysisDefeating jamming with the power of silence a gametheoretic analysis
Defeating jamming with the power of silence a gametheoretic analysisranjith kumar
 
ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ...
ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ...ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ...
ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ...IJCNCJournal
 
Call Admission Control (CAC) with Load Balancing Approach for the WLAN Networks
Call Admission Control (CAC) with Load Balancing Approach for the WLAN NetworksCall Admission Control (CAC) with Load Balancing Approach for the WLAN Networks
Call Admission Control (CAC) with Load Balancing Approach for the WLAN NetworksIJARIIT
 
Recital Study of Various Congestion Control Protocols in wireless network
Recital Study of Various Congestion Control Protocols in wireless networkRecital Study of Various Congestion Control Protocols in wireless network
Recital Study of Various Congestion Control Protocols in wireless networkiosrjce
 
DWDM-RAM: An Architecture for Data Intensive Service Enabled by Next Generati...
DWDM-RAM: An Architecture for Data Intensive Service Enabled by Next Generati...DWDM-RAM: An Architecture for Data Intensive Service Enabled by Next Generati...
DWDM-RAM: An Architecture for Data Intensive Service Enabled by Next Generati...Tal Lavian Ph.D.
 
An Insight Into The Qos Techniques
An Insight Into The Qos TechniquesAn Insight Into The Qos Techniques
An Insight Into The Qos TechniquesKatie Gulley
 
A Professional QoS Provisioning in the Intra Cluster Packet Level Resource Al...
A Professional QoS Provisioning in the Intra Cluster Packet Level Resource Al...A Professional QoS Provisioning in the Intra Cluster Packet Level Resource Al...
A Professional QoS Provisioning in the Intra Cluster Packet Level Resource Al...GiselleginaGloria
 

Similaire à Dual-resource TCPAQM for Processing-constrained Networks (20)

ANALYSIS OF LINK STATE RESOURCE RESERVATION PROTOCOL FOR CONGESTION MANAGEMEN...
ANALYSIS OF LINK STATE RESOURCE RESERVATION PROTOCOL FOR CONGESTION MANAGEMEN...ANALYSIS OF LINK STATE RESOURCE RESERVATION PROTOCOL FOR CONGESTION MANAGEMEN...
ANALYSIS OF LINK STATE RESOURCE RESERVATION PROTOCOL FOR CONGESTION MANAGEMEN...
 
Project DRAC: Creating an applications-aware network
Project DRAC: Creating an applications-aware networkProject DRAC: Creating an applications-aware network
Project DRAC: Creating an applications-aware network
 
OPTIMIZED ROUTING AND DENIAL OF SERVICE FOR ROBUST TRANSMISSION IN WIRELESS N...
OPTIMIZED ROUTING AND DENIAL OF SERVICE FOR ROBUST TRANSMISSION IN WIRELESS N...OPTIMIZED ROUTING AND DENIAL OF SERVICE FOR ROBUST TRANSMISSION IN WIRELESS N...
OPTIMIZED ROUTING AND DENIAL OF SERVICE FOR ROBUST TRANSMISSION IN WIRELESS N...
 
Comparative Analysis of Green Algorithm within Active Queue Management for Mo...
Comparative Analysis of Green Algorithm within Active Queue Management for Mo...Comparative Analysis of Green Algorithm within Active Queue Management for Mo...
Comparative Analysis of Green Algorithm within Active Queue Management for Mo...
 
Adaptive Routing in Wireless Sensor Networks: QoS Optimisation for Enhanced A...
Adaptive Routing in Wireless Sensor Networks: QoS Optimisation for Enhanced A...Adaptive Routing in Wireless Sensor Networks: QoS Optimisation for Enhanced A...
Adaptive Routing in Wireless Sensor Networks: QoS Optimisation for Enhanced A...
 
HYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRES
HYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRESHYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRES
HYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRES
 
DWDM-RAM: Enabling Grid Services with Dynamic Optical Networks
DWDM-RAM: Enabling Grid Services with Dynamic Optical NetworksDWDM-RAM: Enabling Grid Services with Dynamic Optical Networks
DWDM-RAM: Enabling Grid Services with Dynamic Optical Networks
 
CPCRT: Crosslayered and Power Conserved Routing Topology for congestion Cont...
CPCRT: Crosslayered and Power Conserved Routing Topology  for congestion Cont...CPCRT: Crosslayered and Power Conserved Routing Topology  for congestion Cont...
CPCRT: Crosslayered and Power Conserved Routing Topology for congestion Cont...
 
A novel routing technique for mobile ad hoc networks (manet)
A novel routing technique for mobile ad hoc networks (manet)A novel routing technique for mobile ad hoc networks (manet)
A novel routing technique for mobile ad hoc networks (manet)
 
A Bandwidth Efficient Scheduling Framework for Non Real Time Applications in ...
A Bandwidth Efficient Scheduling Framework for Non Real Time Applications in ...A Bandwidth Efficient Scheduling Framework for Non Real Time Applications in ...
A Bandwidth Efficient Scheduling Framework for Non Real Time Applications in ...
 
Cross Layer- Performance Enhancement Architecture (CL-PEA) for MANET
Cross Layer- Performance Enhancement Architecture (CL-PEA) for MANETCross Layer- Performance Enhancement Architecture (CL-PEA) for MANET
Cross Layer- Performance Enhancement Architecture (CL-PEA) for MANET
 
Defeating jamming with the power of silence a gametheoretic analysis
Defeating jamming with the power of silence a gametheoretic analysisDefeating jamming with the power of silence a gametheoretic analysis
Defeating jamming with the power of silence a gametheoretic analysis
 
ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ...
ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ...ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ...
ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ...
 
C0351725
C0351725C0351725
C0351725
 
Call Admission Control (CAC) with Load Balancing Approach for the WLAN Networks
Call Admission Control (CAC) with Load Balancing Approach for the WLAN NetworksCall Admission Control (CAC) with Load Balancing Approach for the WLAN Networks
Call Admission Control (CAC) with Load Balancing Approach for the WLAN Networks
 
Recital Study of Various Congestion Control Protocols in wireless network
Recital Study of Various Congestion Control Protocols in wireless networkRecital Study of Various Congestion Control Protocols in wireless network
Recital Study of Various Congestion Control Protocols in wireless network
 
U01725129138
U01725129138U01725129138
U01725129138
 
DWDM-RAM: An Architecture for Data Intensive Service Enabled by Next Generati...
DWDM-RAM: An Architecture for Data Intensive Service Enabled by Next Generati...DWDM-RAM: An Architecture for Data Intensive Service Enabled by Next Generati...
DWDM-RAM: An Architecture for Data Intensive Service Enabled by Next Generati...
 
An Insight Into The Qos Techniques
An Insight Into The Qos TechniquesAn Insight Into The Qos Techniques
An Insight Into The Qos Techniques
 
A Professional QoS Provisioning in the Intra Cluster Packet Level Resource Al...
A Professional QoS Provisioning in the Intra Cluster Packet Level Resource Al...A Professional QoS Provisioning in the Intra Cluster Packet Level Resource Al...
A Professional QoS Provisioning in the Intra Cluster Packet Level Resource Al...
 

Plus de ambitlick

Low cost Java 2013 IEEE projects
Low cost Java 2013 IEEE projectsLow cost Java 2013 IEEE projects
Low cost Java 2013 IEEE projectsambitlick
 
Ambitlick ns2 2013
Ambitlick ns2 2013Ambitlick ns2 2013
Ambitlick ns2 2013ambitlick
 
Low cost Java IEEE Projects 2013
Low cost Java IEEE Projects 2013Low cost Java IEEE Projects 2013
Low cost Java IEEE Projects 2013ambitlick
 
Handling selfishness in replica allocation
Handling selfishness in replica allocationHandling selfishness in replica allocation
Handling selfishness in replica allocationambitlick
 
Mutual distance bounding protocols
Mutual distance bounding protocolsMutual distance bounding protocols
Mutual distance bounding protocolsambitlick
 
Moderated group authoring system for campus wide workgroups
Moderated group authoring system for campus wide workgroupsModerated group authoring system for campus wide workgroups
Moderated group authoring system for campus wide workgroupsambitlick
 
Efficient spread spectrum communication without pre shared secrets
Efficient spread spectrum communication without pre shared secretsEfficient spread spectrum communication without pre shared secrets
Efficient spread spectrum communication without pre shared secretsambitlick
 
IEEE -2012-13 Projects IN NS2
IEEE -2012-13 Projects IN NS2  IEEE -2012-13 Projects IN NS2
IEEE -2012-13 Projects IN NS2 ambitlick
 
Adaptive weight factor estimation from user review 1
Adaptive weight factor estimation from user   review 1Adaptive weight factor estimation from user   review 1
Adaptive weight factor estimation from user review 1ambitlick
 
Integrated institutional portal
Integrated institutional portalIntegrated institutional portal
Integrated institutional portalambitlick
 
Mutual distance bounding protocols
Mutual distance bounding protocolsMutual distance bounding protocols
Mutual distance bounding protocolsambitlick
 
Moderated group authoring system for campus wide workgroups
Moderated group authoring system for campus wide workgroupsModerated group authoring system for campus wide workgroups
Moderated group authoring system for campus wide workgroupsambitlick
 
Efficient spread spectrum communication without pre shared secrets
Efficient spread spectrum communication without pre shared secretsEfficient spread spectrum communication without pre shared secrets
Efficient spread spectrum communication without pre shared secretsambitlick
 
Comments on “mabs multicast authentication based on batch signature”
Comments on “mabs multicast authentication based on batch signature”Comments on “mabs multicast authentication based on batch signature”
Comments on “mabs multicast authentication based on batch signature”ambitlick
 
Energy efficient protocol for deterministic
Energy efficient protocol for deterministicEnergy efficient protocol for deterministic
Energy efficient protocol for deterministicambitlick
 
Estimating Parameters of Multiple Heterogeneous Target Objects Using Composit...
Estimating Parameters of Multiple Heterogeneous Target Objects Using Composit...Estimating Parameters of Multiple Heterogeneous Target Objects Using Composit...
Estimating Parameters of Multiple Heterogeneous Target Objects Using Composit...ambitlick
 
A Privacy-Preserving Location Monitoring System for Wireless Sensor Networks
A Privacy-Preserving Location Monitoring System for Wireless Sensor NetworksA Privacy-Preserving Location Monitoring System for Wireless Sensor Networks
A Privacy-Preserving Location Monitoring System for Wireless Sensor Networksambitlick
 
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...
Energy-Efficient Protocol for Deterministic and Probabilistic Coverage In Sen...ambitlick
 

Plus de ambitlick (20)

Low cost Java 2013 IEEE projects
Low cost Java 2013 IEEE projectsLow cost Java 2013 IEEE projects
Low cost Java 2013 IEEE projects
 
Ambitlick ns2 2013
Ambitlick ns2 2013Ambitlick ns2 2013
Ambitlick ns2 2013
 
Low cost Java IEEE Projects 2013
Low cost Java IEEE Projects 2013Low cost Java IEEE Projects 2013
Low cost Java IEEE Projects 2013
 
Handling selfishness in replica allocation
Handling selfishness in replica allocationHandling selfishness in replica allocation
Handling selfishness in replica allocation
 
Mutual distance bounding protocols
Mutual distance bounding protocolsMutual distance bounding protocols
Mutual distance bounding protocols
 
Moderated group authoring system for campus wide workgroups
Moderated group authoring system for campus wide workgroupsModerated group authoring system for campus wide workgroups
Moderated group authoring system for campus wide workgroups
 
Efficient spread spectrum communication without pre shared secrets
Efficient spread spectrum communication without pre shared secretsEfficient spread spectrum communication without pre shared secrets
Efficient spread spectrum communication without pre shared secrets
 
IEEE -2012-13 Projects IN NS2
IEEE -2012-13 Projects IN NS2  IEEE -2012-13 Projects IN NS2
IEEE -2012-13 Projects IN NS2
 
Adaptive weight factor estimation from user review 1
Adaptive weight factor estimation from user   review 1Adaptive weight factor estimation from user   review 1
Adaptive weight factor estimation from user review 1
 
Integrated institutional portal
Integrated institutional portalIntegrated institutional portal
Integrated institutional portal
 
Embassy
EmbassyEmbassy
Embassy
 
Crm
Crm Crm
Crm
 
Mutual distance bounding protocols
Mutual distance bounding protocolsMutual distance bounding protocols
Mutual distance bounding protocols
 
Moderated group authoring system for campus wide workgroups
Moderated group authoring system for campus wide workgroupsModerated group authoring system for campus wide workgroups
Moderated group authoring system for campus wide workgroups
 
Efficient spread spectrum communication without pre shared secrets
Efficient spread spectrum communication without pre shared secretsEfficient spread spectrum communication without pre shared secrets
Efficient spread spectrum communication without pre shared secrets
 
Comments on “mabs multicast authentication based on batch signature”
Comments on “mabs multicast authentication based on batch signature”Comments on “mabs multicast authentication based on batch signature”
Comments on “mabs multicast authentication based on batch signature”
 
Energy efficient protocol for deterministic
Energy efficient protocol for deterministicEnergy efficient protocol for deterministic
Energy efficient protocol for deterministic
 
Estimating Parameters of Multiple Heterogeneous Target Objects Using Composit...
Estimating Parameters of Multiple Heterogeneous Target Objects Using Composit...Estimating Parameters of Multiple Heterogeneous Target Objects Using Composit...
Estimating Parameters of Multiple Heterogeneous Target Objects Using Composit...
 
A Privacy-Preserving Location Monitoring System for Wireless Sensor Networks
A Privacy-Preserving Location Monitoring System for Wireless Sensor NetworksA Privacy-Preserving Location Monitoring System for Wireless Sensor Networks
proportional fairness and propose an AQM scheme, called Dual-Resource Queue (DRQ), that can closely approximate proportional fairness for TCP Reno sources with in-network processing requirements. DRQ is scalable because it does not maintain per-flow states while minimizing communication among different resource queues, and it is incrementally deployable because it requires no change in TCP stacks. The simulation study shows that DRQ approximates proportional fairness without much implementation cost, and that even an incremental deployment of DRQ at the edge of the Internet improves the fairness and throughput of these TCP flows. Our work is at its early stage and might lead to an interesting development in congestion control research.

Index Terms—TCP-AQM, transmission link capacity, CPU capacity, fairness, efficiency, proportional fairness.

(An early version of this paper was presented at the IEEE INFOCOM 2006, Barcelona, Spain, 2006. This work was supported by the Center for Broadband OFDM Mobile Access (BrOMA) at POSTECH through the ITRC program of the Korean MIC, supervised by IITA (IITA-2006-C1090-0603-0037). Minsu Shin and Song Chong are with the School of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 305-701, Korea (email: msshin@netsys.kaist.ac.kr; song@ee.kaist.ac.kr). Injong Rhee is with the Department of Computer Science, North Carolina State University, Raleigh, NC 27695, USA (email: rhee@csc.ncsu.edu).)

I. INTRODUCTION

ADVANCES in optical network technology enable a fast-paced increase in physical bandwidth whose growth rate has far surpassed that of other resources such as CPU and memory bus. This phenomenon causes network bottlenecks to shift from bandwidth to other resources. The rise of new applications that require in-network processing hastens this shift, too. For instance, a voice-over-IP call made from a cell phone to a PSTN phone must go through a media gateway that performs audio transcoding "on the fly," as the two end points often use different audio compression standards. Examples of in-network processing services are increasingly abundant, from security and performance-enhancing proxies (PEP) to media translation [1] [2]. These services add additional loads to the processing capacity of the network components.

In the dual-resource environment, different flows could have different processing demands per byte. Traditionally, congestion control research has focused on managing only bandwidth. However, we envision (and it is indeed happening now to some degree) that diverse network services will reside somewhere inside the network, most likely at the edge of the Internet, processing, storing or forwarding data packets on the fly. As in-network processing is likely to be popular in the future, our work, which examines whether the current congestion control theory can be applied without modification and, if not, what scalable solutions can be applied to fix the problem, is highly timely.

In our earlier work [7], we extended proportional fairness to the dual-resource environment and proposed a distributed congestion control protocol for the same environment, where end-hosts are cooperative and explicit signaling is available for congestion control. In this paper, we propose a scalable active queue management (AQM) strategy, called Dual-Resource Queue (DRQ), that can be used by network routers to approximate proportional fairness without requiring any change in end-host TCP stacks. Since it does not require any change in TCP stacks, our solution is incrementally deployable in the current Internet. Furthermore, DRQ is highly scalable in the number of flows it can handle because it does not maintain per-flow states or queues. DRQ maintains only one queue per resource and works with classes of application flows whose processing requirements are a priori known or measurable.

Resource scheduling and management of one resource type in network environments where different flows could have different demands is a well-studied area of research. Weighted-fair queueing (WFQ) [8] and its variants such as deficit round robin (DRR) [9] are well-known techniques to achieve fair and efficient resource allocation. However, these solutions are not scalable, and implementing them in a high-speed router with many flows is difficult since they need to maintain per-flow queues and states. The other extreme is to have routers maintain simpler queue management schemes such as RED [10], REM [11] or PI [12]. Our study finds that
these solutions may yield extremely unfair allocation of CPU and bandwidth and sometimes lead to very inefficient resource usage.

Some fair queueing algorithms such as Core-Stateless Fair Queueing (CSFQ) [13] and Rainbow Fair Queueing (RFQ) [14] have been proposed to eliminate the problem of maintaining per-flow queues and states in routers. However, those schemes are concerned with bandwidth sharing only and do not consider joint allocation of bandwidth and CPU cycles. Estimation-based Fair Queueing (EFQ) [15] and Prediction Based Fair Queueing (PBFQ) [16] have also been proposed for fair CPU sharing, but they require per-flow queues and do not consider joint allocation of bandwidth and CPU cycles either.

Our study, to the best of our knowledge, is the first to examine the issues of TCP and AQM under the dual-resource environment, and we show by simulation that DRQ achieves fair and efficient resource allocation without imposing much implementation cost.

The remainder of this paper is organized as follows. In Section II, we define the problem and fairness in the dual-resource environment; in Sections III and IV, we present DRQ and its simulation study; and in Section V, we conclude the paper.

II. PRELIMINARIES: NETWORK MODEL AND DUAL-RESOURCE PROPORTIONAL FAIRNESS

A. Network model

We consider a network that consists of a set of unidirectional links, L = {1, · · · , L}, and a set of CPUs, K = {1, · · · , K}. The transmission capacity (or bandwidth) of link l is B_l (bits/sec) and the processing capacity of CPU k is C_k (cycles/sec). These network resources are shared by a set of flows (or data sources), S = {1, · · · , S}. Each flow s is associated with its data rate r_s (bits/sec) and its end-to-end route (or path), which is defined by a set of links, L(s) ⊂ L, and a set of CPUs, K(s) ⊂ K, that flow s travels through. Let S(l) = {s ∈ S | l ∈ L(s)} be the set of flows that travel through link l and let S(k) = {s ∈ S | k ∈ K(s)} be the set of flows that travel through CPU k. Note that this model is general enough to include various types of router architectures and network elements with multiple CPUs and transmission links.

Flows can have different CPU demands. We represent this notion by the processing density w_s^k, k ∈ K, of each flow s, which is defined to be the average number of CPU cycles required per bit when flow s is processed by CPU k. w_s^k depends on k since different processing platforms (CPU, OS, and software) would require a different number of CPU cycles to process the same flow s. The processing demand of flow s at CPU k is then w_s^k r_s (cycles/sec).

Since there are limits on CPU and bandwidth capacities, the amount of processing and bandwidth usage by all flows sharing these resources must be less than or equal to the capacities at any time. We represent this notion by the following two constraints: for each CPU k ∈ K, Σ_{s∈S(k)} w_s^k r_s ≤ C_k (processing constraint), and for each link l ∈ L, Σ_{s∈S(l)} r_s ≤ B_l (bandwidth constraint). These constraints are called dual-resource constraints, and a nonnegative rate vector r = [r_1, · · · , r_S]^T satisfying these dual constraints for all CPUs k ∈ K and all links l ∈ L is said to be feasible.

A1: We assume that each CPU k ∈ K knows the processing densities w_s^k for all the flows s ∈ S(k).

This assumption is reasonable because a majority of Internet applications are known and their processing requirements can be measured either off-line or on-line, as discussed below. In practice, network flows could be readily classified into a small number of application types [15], [17]–[19]. That is, there is a finite set of application types, a flow is an instance of an application type, and flows will have different processing densities only if they belong to different application types. In [17], applications have been divided into two categories, header-processing applications and payload-processing applications, and each category has been further divided into a set of benchmark applications. In particular, the authors in [15] experimentally measure the per-packet processing times for several benchmark applications such as encryption, compression, and forward error correction. The measurement results find the network processing workloads to be highly regular and predictable. Based on the results, they propose an empirical model for the per-packet processing time of these applications for a given processing platform. Interestingly, it is a simple affine function of packet size M, i.e., μ_a^k + ν_a^k M, where μ_a^k and ν_a^k are the parameters specific to each benchmark application a for a given processing platform k. Thus, the processing density (in cycles/bit) of a packet of size M from application a at platform k can be modelled as μ_a^k/M + ν_a^k. Therefore, the average processing density w_a^k of application a at platform k can be computed upon arrival of a packet using an exponentially weighted moving average (EWMA) filter:

    w_a^k ← (1 − λ) w_a^k + λ (μ_a^k/M + ν_a^k),  0 < λ < 1.    (1)

One could also directly measure the quantity μ_a^k/M + ν_a^k in Eq. (1) as a whole, instead of relying on the empirical model, by counting the number of CPU cycles actually consumed by a packet while the packet is being processed. Lastly, determining the application type an arriving packet belongs to is an easy task in many commercial routers today since L3/L4 packet classification is a default functionality.

B. Proportional fairness in the dual-resource environment

Fairness and efficiency are two main objectives in resource allocation. The notion of fairness and efficiency has been extensively studied and well understood with respect to bandwidth sharing. In particular, proportionally fair (PF) rate allocation has been considered as the bandwidth sharing strategy that can provide a good balance between fairness and efficiency [20], [21]. In our recent work [7], we extended the notion of proportional fairness to the dual-resource environment, where processing and bandwidth resources are jointly constrained.
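The per-packet EWMA update of Eq. (1) can be sketched as follows (an illustrative sketch; the function name and parameter values are ours, not from the paper):

```python
def make_density_estimator(mu, nu, lam=0.05, w0=0.0):
    """EWMA estimator of the processing density w (cycles/bit) of one
    application on one platform, following Eq. (1).

    mu (cycles) and nu (cycles/bit) are the affine per-packet
    processing-time model parameters; lam is the EWMA gain (0 < lam < 1).
    """
    state = {"w": w0}

    def on_packet(packet_size_bits):
        # Per-packet density sample: mu/M + nu (cycles/bit) for this packet.
        sample = mu / packet_size_bits + nu
        state["w"] = (1.0 - lam) * state["w"] + lam * sample
        return state["w"]

    return on_packet
```

For a stream of fixed-size packets the estimate converges to μ/M + ν; as the paper notes, the model-based sample could equally be replaced by a direct count of the CPU cycles each packet consumed.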
[Fig. 1. Single-CPU and single-link network: flows s ∈ S with rates r_s (bits/sec) and processing densities w_s (cycles/bit) pass through a CPU queue of capacity C (cycles/sec) and a link queue of capacity B (bits/sec).]

In the following, we present this notion and its potential advantages for the dual-resource environment, to define our goal for the main study of this paper on TCP/AQM. Consider an aggregate log utility maximization problem (P) with dual constraints:

    P:  max_r  Σ_{s∈S} α_s log r_s    (2)
    subject to
        Σ_{s∈S(k)} w_s^k r_s ≤ C_k,  ∀ k ∈ K    (3)
        Σ_{s∈S(l)} r_s ≤ B_l,  ∀ l ∈ L    (4)
        r_s ≥ 0,  ∀ s ∈ S    (5)

where α_s is the weight (or willingness to pay) of flow s. The solution r* of this problem is unique since it is a strictly concave maximization problem over a convex set [22]. Furthermore, r* is weighted proportionally fair since Σ_{s∈S} α_s (r_s − r_s*)/r_s* ≤ 0 holds for all feasible rate vectors r, by the optimality condition of the problem. We define this allocation to be (dual-resource) PF rate allocation. Note that this allocation can be different from Kelly's PF allocation [20] since the set of feasible rate vectors can be different from that of Kelly's formulation due to the extra processing constraint (3).

From the duality theory [22], r* satisfies

    r_s* = α_s / ( Σ_{k∈K(s)} w_s^k θ_k* + Σ_{l∈L(s)} π_l* ),  ∀ s ∈ S,    (6)

where θ* = [θ_1*, · · · , θ_K*]^T and π* = [π_1*, · · · , π_L*]^T are the Lagrange multiplier vectors for Eqs. (3) and (4), respectively, and θ_k* and π_l* can be interpreted as congestion prices of CPU k and link l, respectively. Eq. (6) reveals an interesting property: the PF rate of each flow is inversely proportional to the aggregate congestion price of its route, with the contribution of each θ_k* being weighted by w_s^k. The congestion price θ_k* or π_l* is positive only when the corresponding resource becomes a bottleneck, and is zero otherwise.

To illustrate the characteristics of PF rate allocation in the dual-resource environment, let us consider a limited case where there are only one CPU and one link in the network, as shown in Figure 1. For now, we drop k and l in the notation for simplicity. Let w̄_a and w̄_h be the weighted arithmetic and harmonic means of the processing densities of flows sharing the CPU and link, respectively. So,

    w̄_a = Σ_{s∈S} w_s α_s / Σ_{s∈S} α_s  and  w̄_h = Σ_{s∈S} α_s / Σ_{s∈S} (α_s / w_s).

There exist three cases, as below.

• CPU-limited case (θ* > 0 and π* = 0): r_s* = α_s/(w_s θ*), ∀s ∈ S, Σ_{s∈S} w_s r_s* = C and Σ_{s∈S} r_s* ≤ B. From these, we know that this case occurs when C/B ≤ w̄_h, and PF rate allocation becomes r_s* = (α_s/w_s) C / Σ_{s∈S} α_s, ∀s ∈ S.

• Bandwidth(BW)-limited case (θ* = 0 and π* > 0): r_s* = α_s/π*, ∀s ∈ S, Σ_{s∈S} w_s r_s* ≤ C and Σ_{s∈S} r_s* = B. From these, we know that this case occurs when C/B ≥ w̄_a, and PF rate allocation becomes r_s* = α_s B / Σ_{s∈S} α_s, ∀s ∈ S.

• Jointly-limited case (θ* > 0 and π* > 0): This case occurs when w̄_h < C/B < w̄_a. By plugging r_s* = α_s/(w_s θ* + π*), ∀s ∈ S, into Σ_{s∈S} w_s r_s* = C and Σ_{s∈S} r_s* = B, we can obtain θ*, π* and consequently r_s*, ∀s ∈ S.

We can apply other increasing and concave utility functions (including the one from TCP itself [23]) in the dual-resource problem in Eqs. (2)-(5). The reason why we give special attention to proportional fairness by choosing the log utility function is that it automatically yields weighted fair CPU sharing (w_s r_s* = α_s C / Σ_{s∈S} α_s, ∀s ∈ S) if CPU is limited, and weighted fair bandwidth sharing (r_s* = α_s B / Σ_{s∈S} α_s, ∀s ∈ S) if bandwidth is limited, as illustrated in the example of Figure 1. This property is obviously what is desirable and a direct consequence of the particular form of the rate-price relationship given in Eq. (6). Thus, this property is not achievable when other utility functions are used.

Figures 2 (a) and (b) illustrate the bandwidth and CPU allocations enforced by PF rate allocation in the single-CPU and single-link case using an example of four flows with identical weights (α_s = 1, ∀s) and different processing densities (w_1, w_2, w_3, w_4) = (1, 2, 4, 8), where w̄_h = 2.13 and w̄_a = 3.75. For comparison, we also consider a rate allocation in which flows with an identical end-to-end path get an equal share of the maximally achievable throughput of the path, and call it TCP-like rate allocation. That is, if TCP flows run on the example network in Figure 1 with ordinary AQM schemes such as RED on both CPU and link queues, they would have

[Fig. 2. Fairness and efficiency in the dual-resource environment (single-CPU and single-link network): (a) and (b) respectively show the normalized bandwidth and CPU allocations enforced by PF rate allocation, and (c) and (d) respectively show the normalized bandwidth and CPU allocations enforced by TCP-like rate allocation. When C/B < w̄_a, TCP-like rate allocation gives lower bandwidth utilization than PF rate allocation (shown in (a) and (c)) and has an unfair allocation of CPU cycles (shown in (d)).]
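The three-regime solution above is easy to check numerically. The sketch below (our code and naming, not the authors'; the jointly-limited case is solved by nested bisection, one of several possible methods) computes dual-resource PF rates for the single-CPU, single-link example:

```python
def pf_rates(w, alpha, C, B, iters=100):
    """Dual-resource PF rates for one CPU (C cycles/sec) and one link
    (B bits/sec); w[s]: processing density, alpha[s]: weight."""
    A = sum(alpha)
    w_a = sum(ws * a for ws, a in zip(w, alpha)) / A      # arithmetic mean
    w_h = A / sum(a / ws for ws, a in zip(w, alpha))      # harmonic mean

    if C / B <= w_h:   # CPU-limited: r_s = (alpha_s/w_s) C / sum(alpha)
        return [(a / ws) * C / A for ws, a in zip(w, alpha)]
    if C / B >= w_a:   # bandwidth-limited: r_s = alpha_s B / sum(alpha)
        return [a * B / A for a in alpha]

    # Jointly-limited: r_s = alpha_s / (w_s*theta + pi), both constraints
    # binding. Inner bisection finds pi making the link constraint bind.
    def pi_binding_link(theta):
        lo, hi = 0.0, A / B       # sum_s r_s is decreasing in pi
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if sum(a / (ws * theta + mid) for ws, a in zip(w, alpha)) > B:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    lo, hi = 1e-12, A / C         # CPU usage is decreasing in theta
    for _ in range(iters):
        th = 0.5 * (lo + hi)
        p = pi_binding_link(th)
        if sum(ws * a / (ws * th + p) for ws, a in zip(w, alpha)) > C:
            lo = th
        else:
            hi = th
    th = 0.5 * (lo + hi)
    p = pi_binding_link(th)
    return [a / (ws * th + p) for ws, a in zip(w, alpha)]
```

With w = (1, 2, 4, 8) and unit weights, the CPU-limited branch returns equal weighted CPU shares (w_s r_s all equal) and the bandwidth-limited branch returns equal rates, matching the two corner cases discussed above.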
the same long-term throughput. Thus, in our example, TCP-like rate allocation is defined to be the maximum equal rate vector satisfying the dual constraints, which is r_s = B/S, ∀s, if C/B ≥ w̄_a, and r_s = C/(w̄_a S), ∀s, otherwise. The bandwidth and CPU allocations enforced by TCP-like rate allocation are shown in Figures 2 (c) and (d).

From Figure 2, we observe that TCP-like rate allocation yields far less aggregate throughput than PF rate allocation when C/B < w̄_a, i.e., in both CPU-limited and jointly-limited cases. Intuitively, this is because TCP-like allocation, which finds an equal rate allocation, yields unfair sharing of CPU cycles as CPU becomes a bottleneck (see Figure 2 (d)), which causes the severe aggregate throughput drop. In contrast, PF allocation yields equal sharing of CPU cycles, i.e., w_s r_s become equal for all s ∈ S, as CPU becomes a bottleneck (see Figure 2 (b)), which mitigates the aggregate throughput drop. This problem in TCP-like allocation would get more severe when the processing densities of flows have a more skewed distribution.

In summary, in a single-CPU and single-link network, PF rate allocation achieves equal bandwidth sharing when bandwidth is a bottleneck, equal CPU sharing when CPU is a bottleneck, and a good balance between equal bandwidth sharing and equal CPU sharing when bandwidth and CPU form a joint bottleneck. Moreover, in comparison to TCP-like rate allocation, such consideration of CPU fairness in PF rate allocation can increase aggregate throughput significantly when CPU forms a bottleneck either alone or jointly with bandwidth.

III. MAIN RESULT: SCALABLE TCP/AQM ALGORITHM

In this section, we present a scalable AQM scheme, called Dual-Resource Queue (DRQ), that can approximately implement the dual-resource PF rate allocation described in Section II for TCP-Reno flows. DRQ modifies RED [10] to achieve PF allocation without incurring per-flow operations (queueing or state management). DRQ does not require any change in TCP stacks.

A. DRQ objective and optimality

We describe a TCP/AQM network using the fluid model as in the literature [23]-[28]. In the fluid model, the dynamics whose timescale is shorter than several tens (or hundreds) of round-trip times (RTTs) are neglected. Instead, it is convenient to study the longer-timescale dynamics, and so adequate to model the macroscopic dynamics of the long-lived TCP flows that we are concerning.

Let x_s(t) (bits/sec) be the average data rate of TCP source s at time t, where the average is taken over the time interval ∆ (seconds) and ∆ is assumed to be on the order of tens (or hundreds) of RTTs, i.e., large enough to average out the additive-increase multiplicative-decrease (AIMD) oscillation of TCP. Define the RTT τ_s of source s by τ_s = τ_si + τ_is, where τ_si denotes the forward-path delay from source s to resource i and τ_is denotes the backward-path delay from resource i to source s.

A2: We assume that each TCP flow s has a constant RTT τ_s, as customary in the fluid modeling of TCP dynamics [23]–[28].

Let y_l(t) be the average queue length at link l at time t, measured in bits. Then,

    ẏ_l(t) = Σ_{s∈S(l)} x_s(t − τ_sl) − B_l,            y_l(t) > 0
           = [Σ_{s∈S(l)} x_s(t − τ_sl) − B_l]^+,        y_l(t) = 0    (7)

Similarly, let z_k(t) be the average queue length at CPU k at time t, measured in CPU cycles. Then,

    ż_k(t) = Σ_{s∈S(k)} w_s^k x_s(t − τ_sk) − C_k,       z_k(t) > 0
           = [Σ_{s∈S(k)} w_s^k x_s(t − τ_sk) − C_k]^+,   z_k(t) = 0    (8)

Let p_s(t) be the end-to-end marking (or loss) probability at time t to which TCP source s reacts. Then, the rate-adaptation dynamics of TCP Reno or its variants, particularly in the timescale of tens (or hundreds) of RTTs, can be readily described by [23]

    ẋ_s(t) = M_s(1 − p_s(t))/(N_s τ_s^2) − (2/(3 N_s M_s)) x_s^2(t) p_s(t),       x_s(t) > 0
           = [M_s(1 − p_s(t))/(N_s τ_s^2) − (2/(3 N_s M_s)) x_s^2(t) p_s(t)]^+,   x_s(t) = 0    (9)

where M_s is the average packet size in bits of TCP flow s and N_s is the number of consecutive data packets that are acknowledged by an ACK packet in TCP flow s (N_s is typically 2).

In DRQ, we employ one RED queue per resource. Each RED queue computes a probability (we refer to it as a pre-marking probability) in the same way as an ordinary RED queue computes its marking probability. That is, the RED queue at link l computes a pre-marking probability ρ_l(t) at time t by

    ρ_l(t) = 0,                                            ŷ_l(t) ≤ b_l
           = (m_l/(b̄_l − b_l)) (ŷ_l(t) − b_l),             b_l ≤ ŷ_l(t) ≤ b̄_l
           = ((1 − m_l)/b̄_l) (ŷ_l(t) − b̄_l) + m_l,         b̄_l ≤ ŷ_l(t) ≤ 2b̄_l
           = 1,                                            ŷ_l(t) ≥ 2b̄_l    (10)

    ŷ̇_l(t) = (log_e(1 − λ_l)/η_l) ŷ_l(t) − (log_e(1 − λ_l)/η_l) y_l(t)    (11)

where m_l ∈ (0, 1], 0 ≤ b_l < b̄_l, and Eq. (11) is the continuous-time representation of the EWMA filter [25] used by RED, i.e.,

    ŷ_l((k+1)η_l) = (1 − λ_l) ŷ_l(kη_l) + λ_l y_l(kη_l),  λ_l ∈ (0, 1).    (12)

Eq. (11) does not model the case where the averaging timescale of the EWMA filter is smaller than the averaging timescale ∆ on which y_l(t) is defined. In this case, Eq. (11) must be replaced by ŷ_l(t) = y_l(t).

Similarly, the RED queue at CPU k computes a pre-marking probability σ_k(t) at time t by

    σ_k(t) = 0,                                            v̂_k(t) ≤ b_k
           = (m_k/(b̄_k − b_k)) (v̂_k(t) − b_k),             b_k ≤ v̂_k(t) ≤ b̄_k
           = ((1 − m_k)/b̄_k) (v̂_k(t) − b̄_k) + m_k,         b̄_k ≤ v̂_k(t) ≤ 2b̄_k
           = 1,                                            v̂_k(t) ≥ 2b̄_k    (13)
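The piecewise-linear profile of Eq. (10) (and its CPU counterpart, Eq. (13)) is the "gentle" RED curve; a direct transcription as a sketch (our naming, not the paper's):

```python
def red_premark(avg_q, b_min, b_max, m):
    """Pre-marking probability of Eq. (10): 0 below b_min, rising
    linearly to m at b_max, then linearly to 1 at 2*b_max.

    avg_q is the EWMA-averaged queue length (bits for a link queue,
    bit-translated cycles for a CPU queue); 0 <= b_min < b_max, 0 < m <= 1.
    """
    if avg_q <= b_min:
        return 0.0
    if avg_q <= b_max:
        return m / (b_max - b_min) * (avg_q - b_min)
    if avg_q <= 2 * b_max:
        return (1.0 - m) / b_max * (avg_q - b_max) + m
    return 1.0
```

The curve is continuous at both thresholds: it reaches exactly m at avg_q = b_max and exactly 1 at avg_q = 2·b_max.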
    \dot{\hat v}_k(t) = \frac{\log_e(1 - \lambda_k)}{\eta_k} \hat v_k(t) - \frac{\log_e(1 - \lambda_k)}{\eta_k} v_k(t)    (14)

where v_k(t) is the translation of z_k(t) into bits.

Given these pre-marking probabilities, the objective of DRQ is to mark (or discard) packets in such a way that the end-to-end marking (or loss) probability p_s(t) seen by each TCP flow s at time t becomes

    p_s(t) = \frac{X_s(t)^2}{1 + X_s(t)^2},  where  X_s(t) = \sum_{k \in K(s)} w_s^k \sigma_k(t - \tau_{ks}) + \sum_{l \in L(s)} \rho_l(t - \tau_{ls}).    (15)

The actual marking scheme that closely approximates this objective function will be given in Section III-B.

The Reno/DRQ network model given by Eqs. (7)-(15) is called the average Reno/DRQ network, as the model describes the interaction between DRQ and Reno dynamics through long-term average rates rather than explicitly capturing instantaneous TCP rates in AIMD form. This average network model enables us to study its fixed-valued equilibrium and, consequently, to establish in an average sense the equilibrium equivalence of a Reno/DRQ network and a network with the same configuration but under dual-resource PF congestion control.

Let x = [x_1, ..., x_S]^T, \sigma = [\sigma_1, ..., \sigma_K]^T, \rho = [\rho_1, ..., \rho_L]^T, p = [p_1, ..., p_S]^T, y = [y_1, ..., y_L]^T, z = [z_1, ..., z_K]^T, v = [v_1, ..., v_K]^T, \hat y = [\hat y_1, ..., \hat y_L]^T and \hat v = [\hat v_1, ..., \hat v_K]^T.

Proposition 1: Consider an average Reno/DRQ network given by Eqs. (7)-(15) and formulate the corresponding aggregate log utility maximization problem (Problem P) as in Eqs. (2)-(5) with \alpha_s = \sqrt{3/2} M_s / \tau_s. If the Lagrange multiplier vectors, \theta^* and \pi^*, of this corresponding Problem P satisfy the following conditions:

    C1: \theta_k^* < 1, \forall k \in K(s), \forall s \in S,    (16)
    C2: \pi_l^* < 1, \forall l \in L(s), \forall s \in S,    (17)

then the average Reno/DRQ network has a unique equilibrium point (x^*, \sigma^*, \rho^*, p^*, y^*, z^*, v^*, \hat y^*, \hat v^*) and (x^*, \sigma^*, \rho^*) is the primal-dual optimal solution of the corresponding Problem P. In addition, v_k^* > b_k if \sigma_k^* > 0 and 0 \le v_k^* \le b_k otherwise, and y_l^* > b_l if \rho_l^* > 0 and 0 \le y_l^* \le b_l otherwise, for all k \in K and l \in L.

Proof: The proof is given in the Appendix.

Proposition 1 implies that once the Reno/DRQ network reaches its steady state (i.e., equilibrium), the average data rates of Reno sources satisfy weighted proportional fairness with weights \alpha_s = \sqrt{3/2} M_s / \tau_s. In addition, if a CPU k is a bottleneck (i.e., \sigma_k^* > 0), its average equilibrium queue length v_k^* stays at a constant value greater than b_k, and if not, it stays at a constant value between 0 and b_k. The same holds for link congestion.

The existence and uniqueness of such an equilibrium point in the Reno/DRQ network is guaranteed if conditions C1 and C2 hold in the corresponding Problem P; otherwise, the Reno/DRQ network has no equilibrium point. In the current Internet environment, however, these conditions will hardly be violated, particularly as the bandwidth-delay products of flows increase. By applying C1 and C2 to the Lagrangian optimality condition of Problem P in Eq. (6) with \alpha_s = \sqrt{3/2} M_s / \tau_s, we have

    \frac{r_s^* \tau_s}{M_s} = \frac{\sqrt{3/2}}{\sum_{k \in K(s)} w_s^k \theta_k^* + \sum_{l \in L(s)} \pi_l^*}    (18)
                             > \frac{\sqrt{3/2}}{\sum_{k \in K(s)} w_s^k + |L(s)|}    (19)

where r_s^* \tau_s / M_s is the bandwidth-delay product (or window size) of flow s, measured in packets. The maximum packet size in the Internet is M_s = 1,536 bytes (i.e., the maximum Ethernet packet size). The flows with the minimum processing density are IP forwarding applications with maximum packet size [17]. For instance, a measurement study in [15] showed that the per-packet processing time required for a NetBSD radix-tree routing table lookup on a 167 MHz Pentium processor is 51 \mu s (a faster CPU reduces this time proportionally, so, as what matters is the number of cycles per bit, the estimate carries over to other CPUs). Thus, the processing density of this application flow is about w_s^k = 51 (\mu s) x 167 (MHz) / (1,536 x 8) (bits) \approx 0.69 (cycles/bit). Therefore, from Eq. (19), the worst-case lower bound on the window size becomes r_s^* \tau_s / M_s > 1.77 (packets), which occurs when the flow traverses only a CPU in its path (i.e., |K(s)| = 1 and |L(s)| = 0). This concludes that conditions C1 and C2 will never be violated as long as the steady-state average TCP window size is sustainable at a value greater than or equal to 2 packets, even in the worst case.

B. DRQ implementation

In this section, we present a simple, scalable packet marking (or discarding) scheme that closely approximates the DRQ objective function laid out in Eq. (15).

A3: We assume that for all times

    \Big[ \sum_{k \in K(s)} w_s^k \sigma_k(t - \tau_{ks}) + \sum_{l \in L(s)} \rho_l(t - \tau_{ls}) \Big]^2 \ll 1, \forall s \in S.    (20)

This assumption implies that (w_s^k \sigma_k(t))^2 \ll 1, \forall k \in K, \rho_l(t)^2 \ll 1, \forall l \in L, and that any product of w_s^k \sigma_k(t) and \rho_l(t) is also much smaller than 1. Note that our analysis is based on the long-term average values of \sigma_k(t) and \rho_l(t). The typical operating points of TCP in the Internet during steady state, where TCP shows reasonable performance, are under low end-to-end loss probabilities (less than 1%) [29]. Since the end-to-end average probabilities are low, the marking probabilities at individual links and CPUs can be much lower.

Let R be the set of all the resources (including CPUs and links) in the network. Also, for each flow s, let R(s) = {1, ..., |R(s)|} \subset R be the set of all the resources that it traverses along its path, and let i \in R(s) denote the i-th resource along its path and indicate whether it is a CPU or a link.
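The window-size bound in Eq. (19) is easy to check numerically. The sketch below (our own helper name; the 0.69 cycles/bit density comes from the NetBSD measurement cited above) reproduces the worst-case figure of about 1.77 packets:

```python
import math

def window_lower_bound(densities, n_links):
    """Worst-case lower bound on r* tau / M in packets, per Eq. (19):
    sqrt(3/2) over the sum of processing densities plus the link count."""
    return math.sqrt(1.5) / (sum(densities) + n_links)

# Minimum-density flow: IP forwarding of maximum-size (1,536-byte) packets.
# 51 us per lookup on a 167 MHz CPU, converted to cycles per bit.
w_min = 51e-6 * 167e6 / (1536 * 8)   # ~0.69 cycles/bit

# Worst case from the text: a path with a single CPU and no links.
bound = window_lower_bound([w_min], 0)
assert abs(w_min - 0.69) < 0.01
assert 1.7 < bound < 1.8   # the 1.77-packet bound quoted in the text
```

Adding more CPUs or bandwidth-bottlenecked links to the path only lowers the bound further, which is why sustaining an average window of 2 packets suffices in every case.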
Then, some manipulation after applying Assumption A3 to Eq. (15) gives

    p_s(t) \approx \Big[ \sum_{k \in K(s)} w_s^k \sigma_k(t - \tau_{ks}) + \sum_{l \in L(s)} \rho_l(t - \tau_{ls}) \Big]^2
           = \sum_{i=1}^{|R(s)|} \delta_i(t - \tau_{is}) + \sum_{i=2}^{|R(s)|} \sum_{i'=1}^{i-1} \epsilon_{i'}(t - \tau_{i's}) \epsilon_i(t - \tau_{is})    (21)

where

    \delta_i(t) = (w_s^i \sigma_i(t))^2 if i indicates a CPU, and \rho_i(t)^2 if i indicates a link,    (22)

and

    \epsilon_i(t) = \sqrt{2} w_s^i \sigma_i(t) if i indicates a CPU, and \sqrt{2} \rho_i(t) if i indicates a link.    (23)

Eq. (21) tells us that each resource i \in R(s) (except the first resource in R(s), i.e., i = 1) contributes to p_s(t) with two quantities, \delta_i(t - \tau_{is}) and \sum_{i'=1}^{i-1} \epsilon_{i'}(t - \tau_{i's}) \epsilon_i(t - \tau_{is}). Moreover, resource i can compute the former using its own congestion information, i.e., \sigma_i(t) if it is a CPU or \rho_i(t) if it is a link, whereas it cannot compute the latter without knowing the congestion information of the upstream resources on its path (\forall i' < i); that is, the latter requires inter-resource signaling to exchange congestion information. For this reason, we refer to \delta_i(t) as the intra-resource marking probability of resource i at time t and to \epsilon_i(t) as the inter-resource marking probability of resource i at time t. We solve this intra- and inter-resource marking problem using two-bit ECN flags, without explicit communication between resources.

Consider the two-bit ECN field in the IP header [30]. Among the four possible values of the ECN bits, we use three to indicate three cases: initial state (ECN=00), signaling-marked (ECN=10) and congestion-marked (ECN=11). When a packet is congestion-marked (ECN=11), the packet is either marked (if TCP supports ECN) or discarded (if not). DRQ sets the ECN bits as shown in Figure 3.

Fig. 3. DRQ's ECN marking algorithm:
    When a packet arrives at resource i at time t:
        set ECN to 11 with probability \delta_i(t);
        if (ECN == 00)
            set ECN to 10 with probability \epsilon_i(t);
        else if (ECN == 10)
            set ECN to 11 with probability \epsilon_i(t);

Below, we verify that the ECN marking scheme in Figure 3 approximately implements the objective function in Eq. (21). Consider a flow s with path R(s). For now, we drop the time index t to simplify the notation. Let P_{00}^i, P_{10}^i and P_{11}^i respectively denote the probabilities that packets of flow s carry ECN=00, ECN=10 and ECN=11 upon departure from the i-th resource in R(s). Then, the proposed ECN marking scheme can be expressed by the following recursion. For i = 1, 2, ..., |R(s)|,

    P_{11}^i = P_{11}^{i-1} + (1 - P_{11}^{i-1}) \delta_i + P_{10}^{i-1} (1 - \delta_i) \epsilon_i    (24)
             = 1 - (1 - \delta_i)(1 - P_{11}^{i-1} - P_{10}^{i-1} \epsilon_i),    (25)
    P_{10}^i = P_{10}^{i-1} (1 - \delta_i)(1 - \epsilon_i) + P_{00}^{i-1} (1 - \delta_i) \epsilon_i,    (26)
    P_{00}^i = P_{00}^{i-1} (1 - \delta_i)(1 - \epsilon_i)    (27)

with the initial condition P_{00}^0 = 1, P_{10}^0 = 0 and P_{11}^0 = 0. Evolving i from 0 to |R(s)|, we obtain

    P_{11}^{|R(s)|} = 1 - \prod_{i=1}^{|R(s)|} (1 - \delta_i) \Big( 1 - \sum_{i=2}^{|R(s)|} \sum_{i'=1}^{i-1} \epsilon_{i'} \epsilon_i \Big) + \Theta    (28)

where \Theta collects the higher-order terms (order \ge 3) of the \epsilon_i's. By Assumption A3, we have

    P_{11}^{|R(s)|} \approx 1 - \prod_{i=1}^{|R(s)|} (1 - \delta_i) \Big( 1 - \sum_{i=2}^{|R(s)|} \sum_{i'=1}^{i-1} \epsilon_{i'} \epsilon_i \Big)    (29)
                    \approx \sum_{i=1}^{|R(s)|} \delta_i + \sum_{i=2}^{|R(s)|} \sum_{i'=1}^{i-1} \epsilon_{i'} \epsilon_i,    (30)

which concludes that the proposed ECN marking scheme approximately implements the DRQ objective function in Eq. (21), since P_{11}^{|R(s)|} = p_s.

Disclaimer: DRQ requires alternative semantics for the ECN field in the IP header, which differ from the default semantics defined in RFC 3168 [31]. What we have shown here is that DRQ can be implemented using two-bit signaling such as ECN. The coexistence of the default semantics and the alternative semantics required by DRQ needs further study.

C. DRQ stability

In this section, we explore the stability of Reno/DRQ networks. Unfortunately, analyzing their global stability is an extremely difficult task since the dynamics involved are nonlinear and retarded. Here, we present a partial result concerning local stability, i.e., stability around the equilibrium point.

Define the |R| x |S| matrix \Gamma(z) whose (i, s) element is given by

    \Gamma_{is}(z) = w_s^i e^{-z \tau_{is}} if s \in S(i) and i indicates a CPU; e^{-z \tau_{is}} if s \in S(i) and i indicates a link; 0 otherwise.    (31)

Proposition 2: An average Reno/DRQ network is locally stable if we choose the RED parameters in DRQ such that max{ m_k / (\bar b_k - b_k), (1 - m_k) / \bar b_k } C_k \in (0, \psi), \forall k \in K, max{ m_l / (\bar b_l - b_l), (1 - m_l) / \bar b_l } B_l \in (0, \psi), \forall l \in L, and

    \psi \le \frac{(3/2) \phi_{min}^2[\Gamma(0)]}{|R| \Lambda_{max}^2 \tau_{max} w_{max}^3}    (32)

where \Lambda_{max} = max{ max_k{C_k} / (min_{s,k}{w_s^k} min_s{M_s}), max_l{B_l} / min_s{M_s} }, \tau_{max} = max_s{\tau_s}, w_{max} = max_{s,k}{w_s^k, 1} and \phi_{min}[\Gamma(0)] denotes the smallest singular value of the matrix \Gamma(z) evaluated at z = 0.

Proof: The proof is given in the Appendix and is a straightforward application of the TCP/RED stability result in [32].
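The marking rule of Figure 3 and the recursion in Eqs. (24)-(27) can be exercised directly. The sketch below (our own code, with made-up per-resource prices that are small, as Assumption A3 requires) runs the exact state-probability recursion along a three-resource path, checks it against the target of Eq. (30), and cross-checks the per-packet rule by Monte Carlo:

```python
import math
import random

def ecn_recursion(deltas, epss):
    """Exact evolution of (P00, P10, P11) along a path, Eqs. (24)-(27)."""
    p00, p10, p11 = 1.0, 0.0, 0.0
    for d, e in zip(deltas, epss):
        # Simultaneous update: each line uses the previous resource's values.
        p11, p10, p00 = (p11 + (1 - p11) * d + p10 * (1 - d) * e,
                         p10 * (1 - d) * (1 - e) + p00 * (1 - d) * e,
                         p00 * (1 - d) * (1 - e))
    return p00, p10, p11

def mark_path(deltas, epss, rng):
    """Per-packet ECN marking rule of Figure 3, applied resource by resource."""
    ecn = "00"
    for d, e in zip(deltas, epss):
        if rng.random() < d:                      # intra-resource marking
            ecn = "11"
        elif ecn == "00" and rng.random() < e:    # signaling-marked upstream
            ecn = "10"
        elif ecn == "10" and rng.random() < e:    # completed by a later resource
            ecn = "11"
    return ecn

# Hypothetical path prices a_i = w * sigma_i (CPU) or rho_i (link).
a = [0.03, 0.02, 0.04]
deltas = [x ** 2 for x in a]               # Eq. (22)
epss = [math.sqrt(2) * x for x in a]       # Eq. (23)

_, _, p11 = ecn_recursion(deltas, epss)
target = sum(a) ** 2                       # DRQ objective, Eq. (21)
assert abs(p11 - target) / target < 0.05   # recursion matches Eq. (30)

rng = random.Random(7)
n = 200_000
mc = sum(mark_path(deltas, epss, rng) == "11" for _ in range(n)) / n
assert abs(mc - p11) < 1.5e-3              # packet-level rule agrees
```

Note how the cross terms arise with no inter-resource messages: an upstream resource turns a 00 packet into 10 with probability proportional to its own price, and a downstream resource promotes a 10 packet to 11 with probability proportional to its price, so the product of the two prices appears in the end-to-end 11 probability.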
IV. PERFORMANCE

A. Simulation setup

In this section, we use simulation to verify the performance of DRQ in the dual-resource environment with TCP Reno sources. We compare the performance of DRQ with that of the two other AQM schemes discussed in the introduction. One is the simplest approach, where both CPU and link queues use RED; the other uses DRR (a variant of WFQ) to schedule CPU usage among competing flows according to the processing density of each flow. DRR maintains per-flow queues and equalizes the CPU usage in a round-robin fashion when the processing demand exceeds the CPU capacity (i.e., when the system is CPU-limited). In some sense, these choices of AQM are two extremes: one is simple but less fair in its use of CPU, as RED is oblivious to the differing CPU demands of flows; the other is complex but fair in its use of CPU, as DRR enforces equal CPU shares among these flows. Our goal is to demonstrate through simulation that DRQ, using two FIFO queues, always offers provable fairness and efficiency, defined as the dual-resource PF allocation. Note that all three schemes use RED for link queues, but DRQ applies its own marking algorithm of Figure 3 to link queues, using the marking probability obtained from the underlying RED queue.

Fig. 4. Single link scenario in the dumbbell topology: four classes of TCP sources with processing densities w = 0.25, 0.50, 1.00 and 2.00 (ten sources per class) send through routers R1 and R2 over a 40 Mbps bottleneck link to their sinks.
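The dual-resource PF benchmark used as the reference allocation in this section can be computed directly for the Figure 4 setup. The sketch below is our own illustration, not the paper's code: since all flows share one RTT and packet size, the weights are equal and the PF rates take the form r_s = 1/(nu + mu * w_s), where mu and nu are the CPU and bandwidth prices found by bisection under complementary slackness:

```python
# Flows of the Fig. 4 scenario: ten flows in each of four classes.
w = [0.25] * 10 + [0.5] * 10 + [1.0] * 10 + [2.0] * 10   # cycles/bit
B = 40.0                                                 # bottleneck bandwidth, Mbps

def pf_rates(C, iters=100):
    """PF rates r_s = 1/(nu + mu*w_s) subject to sum(r) <= B (Mbps)
    and sum(w_s * r_s) <= C (Mcycles/s), via nested bisection on the duals."""
    def nu_for(mu):
        # Smallest nu >= 0 keeping bandwidth usage at or below B.
        if mu > 0 and sum(1.0 / (mu * ws) for ws in w) <= B:
            return 0.0                     # bandwidth slack: its price is zero
        lo, hi = 0.0, len(w) / B           # at nu = S/B usage is already <= B
        for _ in range(iters):
            mid = (lo + hi) / 2
            if sum(1.0 / (mid + mu * ws) for ws in w) > B:
                lo = mid
            else:
                hi = mid
        return hi

    def cpu_usage(mu):
        nu = nu_for(mu)
        return sum(ws / (nu + mu * ws) for ws in w), nu

    usage, nu = cpu_usage(0.0)
    mu = 0.0
    if usage > C:                          # CPU constraint binds: raise its price
        lo, hi = 0.0, len(w) / C           # at mu = S/C the CPU usage is <= C
        for _ in range(iters):
            mid = (lo + hi) / 2
            if cpu_usage(mid)[0] > C:
                lo = mid
            else:
                hi = mid
        mu = hi
        nu = nu_for(mu)
    return [1.0 / (nu + mu * ws) for ws in w]

# CPU-limited (C = 10 Mcycles/s): per-flow rate inversely proportional to w.
r = pf_rates(10.0)
assert abs(r[0] - 1.0) < 1e-6 and abs(r[-1] - 0.125) < 1e-6
# Bandwidth-limited (C = 55): every flow gets the equal share of 1 Mbps.
assert all(abs(x - 1.0) < 1e-6 for x in pf_rates(55.0))
```

In the jointly-limited region (e.g., C = 30 Mcycles/s here) both duals are positive and both constraints come out tight, matching the dotted reference curves of Figure 5.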
We call the scheme with DRR for CPU queues and RED for link queues DRR-RED, and the scheme with RED for CPU queues and RED for link queues RED-RED.

The simulation is performed in the NS-2 [33] environment. We modified NS-2 to emulate the CPU capacity by simply holding a packet for the duration of its processing time. In the simulation, TCP-NewReno sources are used at the end hosts and RED queues are implemented with the default settings of the "gentle" RED mode [34] (m_i = 0.1, b_i = 50 pkts, \bar b_i = 550 pkts and \lambda_i = 10^{-4}; the packet size is fixed at 500 bytes). The same RED setting is used for the link queues of DRR-RED and RED-RED, and also for both the CPU and link queues of DRQ (DRQ uses a function of the marking probabilities to mark or drop packets at both queues). In our analytical model, we separate CPU and link. To simplify the simulation setup and its description, when we refer to a "link" in the simulation setup, we assume that each link l consists of one CPU and one Tx link (i.e., bandwidth). By adjusting the CPU capacity C_l, the link bandwidth B_l and the amount of background traffic, we can control the bottleneck conditions.

Fig. 5. Average throughput of four different classes of long-lived TCP flows in the dumbbell topology under (a) RED-RED, (b) DRR-RED and (c) DRQ. Each class has a different CPU demand per bit (w). No other background traffic is added. The dotted lines indicate the ideal PF rate allocation for each class, and the vertical bars indicate 95% confidence intervals. DRQ and DRR-RED show good fairness in the CPU-limited region while RED-RED does not.
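The CPU emulation described above (holding each packet in the CPU queue for its processing time) reduces to a one-line computation: cycles needed divided by cycles available per second. A minimal sketch, with a hypothetical function name of our choosing:

```python
def processing_delay(pkt_bytes, w_cycles_per_bit, cpu_hz):
    """Time a CPU queue holds a packet: required cycles / CPU speed."""
    cycles = w_cycles_per_bit * pkt_bytes * 8
    return cycles / cpu_hz

# A 500-byte packet (the fixed simulation packet size) with w = 1 cycle/bit
# on a 40 Mcycles/s CPU needs 4,000 cycles, i.e., 100 microseconds.
d = processing_delay(500, 1.0, 40e6)
assert abs(d - 100e-6) < 1e-12

# Doubling the processing density doubles the hold time, halving the
# maximum forwarding rate (here from 10,000 to 5,000 packets/s).
assert processing_delay(500, 2.0, 40e6) == 2 * d
```

This also makes the CPU-limited regime concrete: the emulated CPU can forward at most C / (w * packet size) packets per second, independent of the link bandwidth.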
Our simulation topologies are chosen from a varied set of Internet topologies, from simple dumbbell topologies to more complex WAN topologies. Below we discuss these setups and simulation scenarios in detail, along with the corresponding results for the three schemes discussed above.

B. Dumbbell with long-lived TCP flows

To confirm our analysis in Section II-B, we run a single-link bottleneck case. Figure 4 shows an instance of the dumbbell topology commonly used in congestion control research. We fix the bandwidth of the bottleneck link to 40 Mbps and vary its CPU capacity from 5 Mcycles/s to 55 Mcycles/s. This variation allows the bottleneck to move from the CPU-limited region to the BW-limited region. Four classes of long-lived TCP flows are added, with processing densities of 0.25, 0.5, 1.0 and 2.0, respectively. We simulate ten TCP Reno flows for each class. All the flows have the same RTT of 40 ms.

In presenting our results, we take the average throughput of the TCP flows that belong to the same class. Figure 5 plots the average throughput of each class. To see whether DRQ achieves PF allocation, we also plot the ideal proportional-fair rate for each class (shown as a dotted line). As shown in Figure 5(a), where typical RED schemes are used at both queues, all TCP flows achieve the same throughput regardless
of the CPU capacity of the link and their processing densities. Figures 5(b) and (c) show that the average throughput curves of DRR-RED and DRQ follow the ideal PF rates reasonably well. When the CPU is the only bottleneck resource, the PF rate of each flow must be inversely proportional to its processing density w_s in order to share the CPU equally. In the BW-limited region, the proportionally-fair rate of each flow is identical to the equal share of the bandwidth. In the jointly-limited region, flows maintain the PF rates while fully utilizing both resources. Although DRQ does not employ the per-flow queue structure of DRR, its performance is comparable to that of DRR-RED.

Figure 6 shows the aggregate throughput achieved by each scheme. It shows that RED-RED has much lower bandwidth utilization than the two other schemes. This is because, as discussed in Section II-B, when the CPU is a bottleneck resource, the equilibrium operating points of TCP flows over the RED CPU queue, which equalize bandwidth usage while keeping the total CPU usage below the CPU capacity, are much lower than those of the other schemes, which instead enforce equal sharing of the CPU (not the bandwidth) in the CPU-limited region.

Fig. 6. Comparison of bandwidth utilization in the dumbbell single-bottleneck topology. RED-RED achieves far less bandwidth utilization than DRR-RED and DRQ when the CPU becomes a bottleneck.

C. Impact of flows with high processing demands

In the introduction, we indicated that RED-RED can cause extreme unfairness in the use of resources. To show this by experiment, we construct a simulation run where we fix the CPU capacity to 40 Mcycles/s and add an increasing number of flows with a high CPU demand (w_s = 10) to the same setup as the dumbbell single-bottleneck environment of Section IV-B. We call these flows high processing flows. From Figure 5, at 40 Mcycles/s, when no high processing flows are added, the CPU is not a bottleneck. But as the number of high processing flows increases, the network moves into the CPU-limited region.

Figure 7 shows the results of this simulation run. In Figure 7(a), as we increase the number of high processing flows, the aggregate CPU share of the high processing flows under RED-RED deviates significantly from the equal CPU share; with a larger number of high processing flows (e.g., 10 flows), these flows dominate the CPU usage over the other, lower processing density flows, driving them to starvation. In contrast, DRQ and DRR approximately implement the equal CPU sharing policy. Even though the number of high processing flows increases, the bandwidth remains a bottleneck resource as before, so the link is in the jointly-limited region, which is why the CPU share of the high-processing flows goes beyond 20%.

Fig. 7. Impact of high processing flows, shown as (a) CPU sharing and (b) total throughput. As the number of high processing flows increases, the network becomes more CPU-bound. Under RED-RED, these flows can dominate the use of the CPU, reaching about 80% of CPU usage with only 10 flows and starving the 40 competing, low processing flows.

D. Dumbbell with background Internet traffic

No Internet links are without cross traffic. In order to emulate more realistic Internet environments, we add cross traffic modelled from various observations on RTT distribution [35], flow sizes [36] and flow arrivals [37]. As modelling Internet traffic is in itself a topic of research, we do not dwell on which model is more realistic. In this paper, we present one model that contains the statistical characteristics that are commonly assumed or confirmed by researchers. These characteristics include that the distribution of flow sizes has long-range dependency [36], [38], that the RTTs of flows are roughly exponentially distributed [39] and that the arrivals of flows are exponentially distributed [37]. Following these characteristics, our cross traffic consists of a number of short- and medium-lived TCP flows that follow a Poisson arrival process and send a random number of packets drawn from a hybrid distribution with a Lognormal body and a Pareto tail, with a cutoff of 133 KB (55% of packets are from flows larger than the cutoff size). We set the parameters of the flow sizes identical to those from the Internet traffic characteristics in [36] (Lognormal: \mu = 9.357, \sigma = 1.318; Pareto: \alpha = 1.1), so that a continuous distribution of flow sizes is included in the background traffic. Furthermore, we also generate reverse-path traffic consisting of 10 long-lived TCP flows and a number of short- and medium-lived flows to increase realism and to reduce the phase effect. The RTT of each cross-traffic flow is randomly selected from a range of 20 to 60 ms. We fix the bottleneck bandwidth and CPU capacity to 40 Mbps and 40 Mcycles/s, respectively, and generate the same number