A fundamental problem facing carriers today is optimizing network cost
and performance by better allocating resources to traffic demands. This is especially
important as the packet infrastructure becomes a critical business resource.
The key to achieving this is traffic engineering (TE), the process of
systematically putting traffic where there is capacity, and backbone
capacity management, the process of ensuring that there is enough network
capacity to meet demand, even at peak times and under failure conditions,
without significant queue buildups.
In this talk, we first focus on the TE techniques and approaches used
in the networks of two large carriers: Global Crossing and
Sprint, which represent the two ends of the traffic engineering spectrum.
We do so by presenting a snapshot of their TE philosophy, deployment strategy,
and network design principles and operation.
We then present the results of an empirical study of backbone traffic
characteristics that suggests that Internet traffic is not self-similar at
timescales relevant to QoS. Our non-parametric approach requires minimal
assumptions (unlike much of the previous work), and allows
us to formulate a practical process for ensuring QoS using backbone
capacity management.
(This latter work is joint with Thomas Telkamp, Global Crossing Ltd. and Arman
Maghbouleh, Cariden Technologies, Inc.)
10. Design Principles: Statistics Collection
- Statistics on individual LSPs can be used to build traffic matrices directly.
- With packet/byte counters alone, we cannot tell how much of the traffic is destined individually to B and to C.
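This is the key advantage of per-LSP counters: each LSP is pinned to an ingress-egress pair, so summing its counters yields a traffic-matrix entry directly. A minimal sketch (the LSP names and byte counts below are hypothetical illustrations, not measured data):

```python
# Sketch: building an ingress->egress traffic matrix from per-LSP byte
# counters. Parallel LSPs between the same POP pair simply sum together.
from collections import defaultdict

def build_traffic_matrix(lsp_stats):
    """Aggregate per-LSP byte counts into an ingress->egress matrix.

    lsp_stats: iterable of (ingress, egress, bytes) tuples, one per LSP.
    """
    matrix = defaultdict(int)
    for ingress, egress, nbytes in lsp_stats:
        matrix[(ingress, egress)] += nbytes
    return dict(matrix)

# Hypothetical counters: two parallel A->B LSPs and one A->C LSP.
stats = [("A", "B", 4_000_000), ("A", "B", 1_000_000), ("A", "C", 2_500_000)]
print(build_traffic_matrix(stats))
# {('A', 'B'): 5000000, ('A', 'C'): 2500000}
```

With only a link counter shared by the A-to-B and A-to-C traffic, the two matrix entries could not be separated; per-LSP accounting makes the split explicit.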
11. Design Principles: LSP Control & Management
- Manually move traffic away from potential congestion via an Explicit Route Object (ERO).
- Add new LSPs with a configured load-splitting ratio.
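One common way to realize a configured load-splitting ratio is hash-based flow assignment, which keeps each flow on a single LSP and so avoids packet reordering. A sketch under that assumption (the flow keys, LSP names, and 70/30 ratio are hypothetical; real routers hash header fields in hardware):

```python
# Sketch: splitting flows across parallel LSPs per a configured ratio,
# using a hash of the flow key so each flow stays on one LSP.
import zlib

def pick_lsp(flow_key, lsps, weights):
    """Map a flow to one of the parallel LSPs in proportion to weights."""
    total = sum(weights)
    h = zlib.crc32(flow_key.encode()) % total
    for lsp, w in zip(lsps, weights):
        if h < w:
            return lsp
        h -= w
    return lsps[-1]

lsps = ["LSP-1", "LSP-2"]
counts = {"LSP-1": 0, "LSP-2": 0}
for i in range(10_000):
    counts[pick_lsp(f"src{i}:dst{i % 50}", lsps, [70, 30])] += 1
print(counts)  # roughly a 70/30 split
```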
14. SprintLink TM IP Backbone Network
- 19+ countries, 30+ major international cities, 5 continents (reaching S. America as well)
- 400+ POPs
- 110,000+ route miles (shared with the Sprint long-distance network)
- Map courtesy of Jeff Chaltas, Sprint Public Relations; represents connectivity only (not to scale)
22. A Peek at a Row of a Traffic Matrix
- Adapted from [Bhattacharya02]; summary of data collected.
- Distribution of aggregate access traffic across the other POPs in the Sprint backbone.
- Figure: access customers and peers (Web 1, Web 2, ISP, Peer 1, Peer 2) feed the Sprint POP under study, which connects to the backbone.
28. Actual Behavior of Streams in the Sprint Backbone (source: [Bhattacharya02])
- Elephants retain a large share of the bandwidth and maintain their ordering.
- Figures: time-of-day variation of elephants and mice to a busy egress POP; distribution of traffic from the p8 streams of the POP under study to 3 egress POPs, in decreasing order of traffic volume.
- Fewer than 10 of the largest streams account for up to 90% of the traffic.
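The elephants-and-mice skew on this slide is easy to reproduce with a toy heavy-tailed distribution. The Zipf-like stream volumes below are synthetic, not Sprint data; the point is only that when volumes are heavy-tailed, a handful of top-ranked streams carry most of the traffic:

```python
# Sketch: with Zipf-like (1/rank) stream volumes, a few "elephants"
# dominate the total, as in the [Bhattacharya02] measurements.
volumes = sorted((1000 / rank for rank in range(1, 201)), reverse=True)
total = sum(volumes)

cumulative, streams = 0.0, 0
for v in volumes:
    cumulative += v
    streams += 1
    if cumulative / total >= 0.5:
        break
print(f"{streams} of {len(volumes)} streams carry 50% of the traffic")
```

Here roughly the top 5% of streams already carry half the volume; with the steeper tails seen in real backbones, the concentration is even more extreme.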
38. Poisson versus Self-Similar Traffic
- A self-similar process is scale invariant: it remains bursty at every aggregation timescale, whereas a Markovian (Poisson) process smooths out as the timescale grows.
- Refs. [Liljenstolpe01], [Lothberg01], [Tekinay99]
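The scale invariance on this slide can be shown in miniature. A Poisson-like source smooths out under aggregation (its coefficient of variation falls roughly as 1/sqrt(m) for window size m), while an on/off source with heavy-tailed burst lengths, a standard generator of self-similar traffic, stays bursty. The Pareto shape 1.2, rates, and window size below are hypothetical choices, not fitted to real traces:

```python
# Sketch: aggregation smooths Poisson-like traffic but not traffic with
# heavy-tailed on/off periods (the classic source of self-similarity).
import random
import statistics

random.seed(7)

def cov(xs):
    """Coefficient of variation: stdev / mean."""
    return statistics.pstdev(xs) / statistics.fmean(xs)

def aggregate(xs, m):
    """Sum the series over non-overlapping windows of m slots."""
    return [sum(xs[i:i + m]) for i in range(0, len(xs) - m + 1, m)]

def onoff(n, duration):
    """0/1 per-slot rate of an on/off source; duration() draws burst lengths."""
    xs, on = [], True
    while len(xs) < n:
        xs.extend([int(on)] * max(1, int(duration())))
        on = not on
    return xs[:n]

n, m = 200_000, 100
poisson_like = [sum(random.random() < 0.5 for _ in range(10)) for _ in range(n)]
self_similar = onoff(n, lambda: random.paretovariate(1.2))

r_poisson = cov(aggregate(poisson_like, m)) / cov(poisson_like)
r_heavy = cov(aggregate(self_similar, m)) / cov(self_similar)
print(f"Poisson-like: CoV shrinks to {r_poisson:.2f}x under 100x aggregation")
print(f"heavy-tailed on/off: CoV shrinks only to {r_heavy:.2f}x")
```

Note it is the heavy-tailed *correlations* (long bursts), not heavy-tailed per-slot values alone, that defeat the smoothing; i.i.d. samples of any distribution still average out.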
Good afternoon, and welcome to the course on next-generation high-performance switch architectures. Thank you for coming. Over these two days, my goal is to explore this subject in enough detail to build a deeper understanding of how canonical high-speed switch architectures operate. Before we begin, I'd like to give you a quick overview of the course and the sequence in which we'll cover the material. The material is organized into six parts, half of which we'll cover today.

Today, we'll begin with an overview of some basic switching notions and look at the essential architectural components of switches and cross-connects, along with the generic data path processing that occurs within each. We will then look at a taxonomy of switch architectures and switching fabrics: we'll cover the evolution of switch/routers over several generations, examine the properties and features of different types of switching fabrics, and review the properties of input and output queueing. Having developed an overall understanding of the architectures of switches and routers, we'll then trace the data path through an IP router, a TDM cross-connect, and a hybrid TDM/IP switch, and look at two examples in detail: the Cisco Catalyst switch and the Juniper M Series routers.

Starting tomorrow, we will dissect each of the three main processing steps in a switch/router: input processing, scheduling across the switch fabric, and output queueing. We'll look at methods, algorithms, and techniques for each, with a focus on hardware complexity and implementation issues.

I have factored in time for discussions, so I hope you'll ask questions freely at any time during these lectures. This will enable me to adjust my presentations to best help you, and it will also make these lectures more interesting for me. If you have additional questions, please feel free to contact me after May 6th; my contact information is on the title slide.
So, in this first lecture, I'll begin by looking at circuit and packet switching. Of course, this will be very familiar to everyone here; my goal is simply to recap some salient points that we'd want to keep at the back of our minds during the course. I'll then highlight some fundamental switching notions. These are important because, as we'll see, much of the effort in the design of architectures and algorithms for switch/routers is directed at addressing them. Finally, I'll look at the basic architectural components of a packet router and of a circuit switch, or TDM cross-connect.
We get up to 1 TB of data per day per POP! Timestamps have 2 µs accuracy, and each record holds 44 bytes of packet header.
Where does the traffic come from? That is, which sources/links/customers contribute to the traffic, and how much? POPs: How does traffic vary with time of day? What is the distribution of traffic across aggregate flows? In other words, we want information on routing and traffic flow between POPs, i.e., information about traffic in both time and space. Matrix design: Is there a better way to spread the traffic across the paths between POPs? At what granularity should this be done? We look at this in the techniques lecture.
Transit time through a router matters because it adds to end-to-end delay, is critical for delay-sensitive applications, and is useful for controlling QoS.
Observations: This histogram shows that the common assumption that traffic from a source is uniformly distributed to all destinations does not match Internet behavior at all. This is because: Some POPs sink more traffic than others, based simply on geography, on where international trunks terminate, etc. The traffic distribution between POPs also exhibits a significant degree of variation: the volume of traffic an egress POP receives depends on the number and type of customers attached to that POP. Likewise, the amount of traffic an ingress POP generates depends on the number and type of customers, the access links, their speeds, etc.
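The mismatch with the uniform assumption can be quantified on a single matrix row. The per-egress volumes below are hypothetical, shaped like the skewed distributions reported in [Bhattacharya02], not actual measurements; a uniform fan-out would give a coefficient of variation of zero:

```python
# Sketch: testing the uniform fan-out assumption on one traffic-matrix row.
import statistics

row = {"NYC": 320, "SJC": 210, "CHI": 95, "DFW": 60, "ATL": 40, "SEA": 15}  # Mb/s
uniform = statistics.fmean(row.values())  # what a uniform split would predict

for pop, v in row.items():
    print(f"{pop}: {v:>4} Mb/s  ({v / uniform:.1f}x the uniform prediction)")

cov = statistics.pstdev(list(row.values())) / uniform
print(f"coefficient of variation across egress POPs: {cov:.2f}")  # 0 if uniform
```

Even this mildly skewed synthetic row deviates strongly from uniform; measured rows deviate more.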
TE: If a new POP/link is added, can they predict where in the network they need to add new bandwidth? Conversely, where do they need an additional POP/link to tackle congestion or growing traffic demands? BGP peering: Are we carrying unwanted IP traffic? Are our peers' announcements consistent with our BGP announcements? Intra-domain routing: Can we verify load balancing? Can we design adaptive policies? SLAs: Information on how much traffic is exchanged between peers, and how it varies, shows what guarantees can be offered for delay, throughput, etc. Reports: The data can be used to generate reports for customers verifying that their traffic is being routed correctly and consistently.
Better load balancing requires deviating from shortest-path routing, so we need to ensure that this process does not introduce significant delays. That is unlikely, because the backbone is highly meshed, so most alternate paths between an ingress-egress POP pair are only 1-2 hops longer than the shortest path, and average delay through a router is only a few ms, so the additional delay due to a few extra hops will not be significant.
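A back-of-the-envelope check makes the claim concrete. The per-router and per-hop propagation delays below are hypothetical but within the ranges cited here (a few ms per router, 1-2 extra hops):

```python
# Sketch: delay added by routing over an alternate path 2 hops longer.
per_router_ms = 3.0           # assumed average transit delay per router
propagation_per_hop_ms = 4.0  # assumed fiber propagation per hop

def path_delay_ms(hops):
    return hops * (per_router_ms + propagation_per_hop_ms)

shortest = path_delay_ms(3)
alternate = path_delay_ms(5)  # 2 extra hops
print(f"shortest: {shortest:.0f} ms, alternate: {alternate:.0f} ms, "
      f"added: {alternate - shortest:.0f} ms")
```

Under these assumptions the alternate path adds on the order of 10-15 ms, small relative to typical end-to-end budgets.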
I'll now highlight a few switching phenomena that one must contend with in both circuit and packet switching. The reason for considering them here is that all architectures are ultimately designed to overcome these phenomena.

The first of these is output contention, which occurs when sources transmit at rates whose aggregate exceeds the capacity of one or more outputs. Circuit and packet switches handle output contention differently. In circuit switching, of course, no new circuit can be set up on a link that is full, so the moment there is output contention, any new circuit must be rejected. In packet switching, the handling differs depending on the nature of the contention. Short-term congestion can be tackled by buffering data and transmitting it a short while later, when resources become available. Long-term or sustained congestion can be handled in one of three ways: dropping excess data (the question here is whom to drop), applying admission control at the source (whom to throttle), or using flow control and sending feedback to the source (whose rate to reduce, and by how much). The sizing of the buffers at various points in a switch/router is critically related to the nature and type of contention the switch is designed to handle.
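The short-term versus sustained distinction can be sketched with a toy output queue. Two inputs occasionally target the same output, which drains one packet per time slot; the buffer size and arrival patterns below are hypothetical:

```python
# Sketch: an output buffer absorbs a short burst of contention but must
# drop packets under sustained overload.
from collections import deque

BUF = 4  # packets the output port can hold

def run(arrivals_per_slot):
    q, delivered, dropped = deque(), 0, 0
    for arrivals in arrivals_per_slot:
        for _ in range(arrivals):
            if len(q) < BUF:
                q.append("pkt")
            else:
                dropped += 1   # sustained overload: drop the excess
        if q:
            q.popleft()        # output drains 1 packet per slot
            delivered += 1
    return delivered, dropped, len(q)

# A burst of 2 packets/slot for 3 slots, then idle: fully absorbed.
print(run([2, 2, 2, 0, 0, 0]))  # (6, 0, 0)
# Sustained 2 packets/slot for 10 slots: the buffer fills, drops begin.
print(run([2] * 10))            # (10, 7, 3)
```

The alternatives to dropping, admission control and flow control, act on the sources instead, but the same buffer-sizing question governs how long a burst the switch can ride out.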