Juniper Networks MX Series 3D with Junos Trio Chipset
Performance, Scalability and Power Efficiency Validation

Introduction

In September 2009, EANTC was commissioned by Juniper Networks to validate the density, scalability, services scale and power efficiency of the new 16-port 10 Gigabit Ethernet (16x10GbE) MPC line card. Additionally, the capabilities of the new Juniper Networks® MX80 3D router were demonstrated, as well as a set of provider edge services and capabilities, all focusing on its MX Series 3D routers.

EANTC collaborated with Juniper’s technical marketing engineers in creating a detailed test plan that was executed in Juniper’s lab in Sunnyvale, California. EANTC was able to independently validate the new product’s capabilities prior to its public introduction and found that the line card delivered the performance and scalability advertised by Juniper.

EANTC Validation Highlights

Throughput and Performance

  Density and Scale
  ➜ 1.44 Tbit/s (unidirectional) throughput in 1/6 rack size
  ➜ 240 Gbit/s (unidirectional) IPv4 and IPv6 forwarding on a single line card

  Unicast Routing Performance
  ➜ 2.4 million IPv4 and 2.4 million IPv6 prefixes

  Multicast Performance
  ➜ Line-rate forwarding (64 through 1,518 bytes) on 71 10GbE ports
  ➜ 3,300 maximum outgoing interfaces on a single line card
  ➜ 60,000 multicast groups on each of the 11 10GbE ports
  ➜ 660,000 total receiver groups on a single line card

Service Scalability
  ➜ Line-rate performance for 6,000 L3VPNs with 2.4 million unicast VPN routes on a single line card
  ➜ Line-rate performance for 6,000 VPLS instances with 240,000* MAC addresses on a single line card

Power Consumption
  ➜ 25.34 watts per 10GbE port with line-rate traffic
  ➜ 3.38 watts per gigabit of throughput

  * Achieved with pre-beta software in the test. Juniper stated that this is not the system maximum.

Tested Devices and Test Equipment

The breadth of the testing and the demonstration required several topologies, devices and operating system configurations to be used for the project:

• MX480 3D router with six 16x10GbE line cards running Junos 10.0-20090914.0 (pre-beta)
• MX480 3D router with a single 16x10GbE line card running Junos 10.0-20090914.0 (pre-beta)
• MX80 3D router running Junos 10.1I20090821 (pre-alpha)
• Functional demonstrations were carried out with up to four Juniper MX Series 3D routers (MX240 3D, MX480 3D, and MX960 3D routers) with a Multiservices DPC running Junos 9.6R1.13

Ixia’s IxNetwork 5.40 was used to conduct all testing.




Density, Forwarding, Scale

Two groups of tests were focused on the new 16x10GbE line cards introduced by Juniper. In the first set of tests, we focused our attention on the Juniper MX480 3D router hosting a single 16x10GbE line card. We characterized the performance and scalability of the new line cards for unicast (IPv4, IPv6, and a mixture of both). We also validated multicast performance, Layer 3 VPN (L3VPN), and virtual private LAN service scalability as well as routing performance of the new line cards.

Throughput Performance on a Fully Loaded Chassis

A new line card, especially one with specifications as impressive as 16 ports of 10GbE, requires a detailed performance analysis. We started our analysis of Juniper’s 16x10GbE line card by investigating the performance of a fully loaded MX480 3D router.

The MX480 3D router is an eight-RU (rack units) router that can accommodate six line cards, in addition to two Routing Engines (REs). We used exactly this configuration as defined in the test plan: two RE-S-2000 Routing Engines and six 16x10GbE line cards.

EANTC used standard methodology as defined in the Internet Engineering Task Force (IETF) Request for Comments (RFC) 2544 for IPv4 throughput measurements and RFC 5180 for IPv6 interconnected devices.

For this test we attached all the ports on the system (i.e., 96 10GbE ports) to the Ixia tester. Full-meshed traffic between all the 96 10GbE ports was created by the Ixia tester. Since we already knew that we could expect up to 240 Gbit/s unidirectional (120 Gbit/s bidirectional) traffic from each line card, we configured the ports to send traffic at 75-percent line rate, intending to validate the advertised forwarding performance of the 16x10GbE MPC. Figure 1 depicts a sample line card configuration.

Figure 1: Line Card Test Setup Example (16 ports in four port groups, with the used traffic ports marked)

For both IPv4 and IPv6 packet types, we found the system capable of delivering 1.44 Tbit/s of full-mesh unidirectional traffic (720 Gbit/s bidirectional) at all tested frame sizes (from 64 bytes to 1,518 bytes) with zero packet loss.

For service providers the implications of our findings are significant. Up until now, Juniper’s line cards could accommodate up to four 10GbE ports in the same form factor. With the introduction of the new 16x10GbE line card, we see a high-performance card that packs three times the forwarding performance and four times the port density in comparison to the previously offered configurations.

Density, Forwarding, Scale Test Highlights
➜ 1.44 Tbit/s unidirectional throughput
➜ Zero packet loss at more than 1 billion pps

Multicast Performance Characteristics

IP multicast plays a pivotal role in most IPTV deployments, financial applications, and content distribution systems due to its highly efficient use of network resources. The tests performed in this section characterized the 16x10GbE line card’s multicast replication capabilities, group capacity, and forwarding performance for mixed unicast and multicast traffic.

Multicast Forwarding and Performance on a Fully Loaded Chassis. The most challenging multicast test setup was an MX480 3D router with all 71 remaining ports across the six 16x10GbE cards requesting multicast traffic and only one port acting as the source. In this setup, the majority of the traffic had to cross the backplane and then be further replicated on the individual line cards.

In this test, we defined the first port on the system as a multicast source, sending to 1,200 multicast groups. All remaining 71 ports joined all groups. We ran a multicast throughput test for 64-, 73-, 128-, 512-, 1,024-, and 1,518-byte frames and expected to register zero frame loss for all frame sizes on all multicast receiving ports.

We ran the tests with all the frame sizes for a duration of 120 seconds per test iteration and validated that the MX480 3D router, with its fully populated line-card configuration, could replicate multicast traffic at line rate with zero packet loss.

The results of the test show that the new line cards are capable of high-performance multicast replication. Sending such small frame sizes as 64 and 73 bytes requires maximum multicast replication performance from a device, and the MX480 3D router had no problems delivering. For major multicast distribution centers or extremely heavy financial trading applications, these results reassure that the router can fulfill their needs.
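
As a rough cross-check of the fully loaded chassis figures above, the short Python sketch below reproduces the offered load and frame rates from the port count and configured load. It is illustrative only; the 20-byte preamble plus inter-frame gap is a standard Ethernet framing assumption, not a figure from the report.

    # Back-of-the-envelope check of the fully loaded MX480 3D test parameters
    # (illustrative only; wire overhead is an assumed Ethernet framing constant).

    PORTS = 96                # six 16x10GbE line cards
    PORT_RATE_BPS = 10e9      # 10GbE line rate per port
    LOAD = 0.75               # ports driven at 75 percent of line rate
    WIRE_OVERHEAD_BYTES = 20  # preamble + inter-frame gap per frame

    per_direction_bps = PORTS * PORT_RATE_BPS * LOAD  # 720 Gbit/s offered each way
    aggregate_bps = 2 * per_direction_bps             # 1.44 Tbit/s counting both directions

    def frame_rate(frame_bytes, offered_bps):
        """Frames per second for a given frame size at a given offered load."""
        return offered_bps / ((frame_bytes + WIRE_OVERHEAD_BYTES) * 8)

    print(f"per direction : {per_direction_bps / 1e9:.0f} Gbit/s")
    print(f"aggregate     : {aggregate_bps / 1e12:.2f} Tbit/s")
    print(f"64-byte frames: {frame_rate(64, per_direction_bps) / 1e9:.2f} billion pps per direction")

Running the sketch yields 720 Gbit/s per direction, 1.44 Tbit/s in aggregate, and just over one billion 64-byte frames per second per direction, consistent with the throughput and packet-rate highlights quoted above.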


Tests Setup. The 16x10GbE line card supports 120 Gbit/s bidirectional (240 Gbit/s unidirectional) forwarding performance. The rest of the tests described in the report focused on the 16x10GbE line card and used 12 out of 16 ports operating at line rate. Figure 2 depicts the configuration used for the tests in the following sections.

Figure 2: Single Line Card Test Setup (16 ports in four port groups; 12 used traffic ports forming a full-mesh traffic group, 4 unused ports)

Mixed Multicast and Unicast Forwarding. We ran an additional multicast test using a mixture of multicast and unicast traffic. One port was used to source multicast traffic while the rest of the ports (on a single 16x10GbE line card) were sending and receiving full-mesh unicast traffic. This scenario closely mimics real-world deployments where multicast and unicast are sent across interfaces at the same time.

We were able to verify that all interfaces could forward 50% multicast and 50% unicast traffic for all packet sizes tested. Figure 3 depicts the results for a single line card.

Figure 3: Mixed Multicast and Unicast Throughput (Gbit/s per frame size from 64 to 1,518 bytes, split into multicast and unicast shares)

Outgoing Interfaces Scalability. Juniper positions the MX Series platforms as aggregation and services edge routers. An MX Series router, therefore, is often required to serve as the last-hop router (the router responsible for multicast traffic replication). For example, in a Triple Play deployment scenario, an MX Series router will aggregate a large number of DSLAMs and must replicate the IPTV streams to all of them. For this reason, the number of outgoing interfaces (OIF), multicast groups, and receivers that can be supported is a critical aspect of the new line cards.

We used two 16x10GbE line cards for this test. One line card was used to source 8 multicast groups, each at 3.98 Mbit/s (roughly a standard definition channel equivalent). On the second line card, we configured 300 VLANs on 11 receiving ports, each receiving all 8 groups. With 8 groups each sending 3.98 Mbit/s of traffic in each VLAN, we were expecting 31.8 Mbit/s per VLAN. All together, with 300 VLANs (and their headers) on a 10GbE port, we observed 9.98 Gbit/s of traffic exiting each port while each VLAN had receivers for all 8 groups. This meant that the system was receiving 26,400 IGMP joins and replicating the 8 multicast groups to 3,300 outgoing interfaces, all on a single line card.

The results met the test’s goals. We were able to verify that all multicast receivers in all VLANs on all interfaces were receiving all 8 streams with no packet loss. We reached 9.98 Gbit/s on the receiving interfaces even when facing the most aggressive packet replication scenario of 64-byte frames.

The test results demonstrated that the 16x10GbE line card is highly capable at multicast replication and is able to scale its outgoing interfaces. While the number of VLANs used was on the high side for IPTV deployments, media-rich financial applications, in which various traders are split across VLANs, could greatly benefit from the scalability seen in the test.

Multicast group scalability. The goal of the test was to measure the number of multicast groups that an MX480 3D router with two 16x10GbE line cards could support. The test goal was simple: increase the number of multicast groups while forwarding traffic without losing a single frame.

RFC 3918, which defines multicast benchmarking methodology, dictates that to find the maximum group number a system can service, the number of groups be increased as long as no loss of frames is recorded on the system under test. Instead of going through this tedious exercise, we asked Juniper to provide a target value of multicast groups that EANTC should validate.

Juniper provided an impressive target value of 60,000 multicast groups. With 11 ports, all requesting 60,000 groups, we expected to register 660,000 IGMP requests on the system and then forward packets, at various frame sizes (from 64 to 1,518 bytes), at line rate.
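
The interface and group counts in the two scalability tests above follow directly from the configured parameters. The short Python sketch below simply reproduces that arithmetic; it is illustrative only, and every input value is taken from the test descriptions above.

    # Outgoing-interface and multicast-group arithmetic (illustrative only).

    GROUPS = 8               # multicast groups sourced by the first line card
    STREAM_MBPS = 3.98       # per-group rate, roughly one SD channel
    VLANS_PER_PORT = 300     # receiver VLANs on each receiving port
    RECEIVING_PORTS = 11     # 10GbE receiver ports on the second line card

    per_vlan_mbps = GROUPS * STREAM_MBPS                   # ~31.8 Mbit/s per VLAN
    per_port_gbps = per_vlan_mbps * VLANS_PER_PORT / 1000  # ~9.55 Gbit/s before VLAN/Ethernet headers
    oifs = VLANS_PER_PORT * RECEIVING_PORTS                # 3,300 outgoing interfaces
    igmp_joins = oifs * GROUPS                             # 26,400 IGMP joins

    # Multicast group scalability test: 60,000 groups joined on each of the 11 ports
    group_memberships = 60_000 * RECEIVING_PORTS           # 660,000 IGMP memberships

    print(per_vlan_mbps, per_port_gbps, oifs, igmp_joins, group_memberships)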


We sent the IGMP join requests on all receiver ports and validated that the system indeed registered all 660,000 requests. We then started the multicast traffic for all groups and let traffic run for 120 seconds. We recorded zero packet loss in all test runs for all frame sizes.

This IGMP scaling figure far exceeded today’s application requirements. The number of groups reached provides ample headroom for innovative, media-rich financial and transactional (stock ticker) services for high-end financial institutions in the next five to ten years. Such high scaling values validate that the MX480 3D router with the 16x10GbE line card is, from a multicast group scalability perspective, future-proof and provides headroom for future multicast-intensive applications.

Multicast Performance Test Highlights
➜ Line-rate multicast forwarding performance
➜ 3,300 outgoing interfaces on a single line card
➜ 60,000 IGMP groups with a single source
➜ 660,000 multicast receivers on a single line card

Services Scale

Forwarding performance, multicast replication, and scalability are the building blocks with which service providers can offer services to their customers. Once we established a baseline for the performance of the new line cards, we investigated their services scalability.

BGP scalability. We started by performing two throughput tests, each with an aggregate of 2.4 million BGP routes: one test for IPv4 and one for IPv6. Both tests used the same hardware, which included an MX480 3D router, a single 16x10GbE line card, and two Routing Engines.

We configured 12 BGP peers, one for each of the 12 ports tested, and exchanged 200,000 prefixes on every port. The IPv6 test used /64 routes. Two IPv4 route configurations were used: one with /30 routes and the second with a more diverse set ranging from /24 to /29 routes. Once we established the BGP neighbors between the tester and router ports, we advertised the routes and sent traffic for all prefixes. We repeated all three tests for 64-, 73-, 128-, 512-, 1,024- and 1,518-byte packets. The IPv6 packet sizes were almost identical apart from the smallest packets, which were 82 bytes.

We expected the router to be able to forward traffic to all routes at all frame sizes for both IPv4 and IPv6. The results met our expectations. For starters, the BGP peers remained stable for the duration of the tests. We were also able to send traffic at line rate for all prefixes without losing a single frame.

L3VPN scalability. We used the same hardware configuration to perform the L3VPN and virtual private LAN service (VPLS) scalability tests. We divided the 12 ports into two sets: 6 were configured as customer-facing and 6 as backbone-facing. On each of the six customer-facing ports Juniper engineers configured 1,000 VLANs, each representing a customer and each establishing an eBGP neighbor with the tester. In total we had 6,000 emulated customer connections set up for the test. Attached to the six ports we defined as upstream (i.e., network-facing), we used Ixia’s IxNetwork to emulate six provider (P) routers and six provider edge (PE) routers. Each of the emulated PE routers was configured with the corresponding Virtual Routing and Forwarding (VRF) instance to match the customer attached to the downstream ports.

Figure 4: L3VPN Scalability Test Setup (emulated CEs with eBGP sessions on the downstream ports; emulated P and PE routers with OSPF+LDP on the upstream ports; all emulation by Ixia IxNetwork, with the line card under test in between)

Each of the emulated 6,000 customers advertised 200 prefixes on each end of the VRF, which equaled 2.4 million routes. Once all eBGP sessions were established and routes were exchanged, we performed a throughput test using a common Internet mix (Imix) of frame sizes: 64 bytes, 576 bytes, 1,400 bytes and 1,470 bytes.

The router showed no hesitation in establishing the eBGP sessions and maintaining them. The 2.4 million VRF routes were learned and installed in the forwarding table after several minutes and remained unchanged for the duration of the test. We sent traffic to all VRF prefixes using the Imix described above and recorded zero packet loss from all 2.6 billion packets sent.
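
As a quick cross-check, the sketch below reproduces the scale arithmetic behind the BGP and L3VPN tests (Python, illustrative only; all inputs are the values quoted above).

    # Scale arithmetic for the BGP and L3VPN throughput tests (illustrative only).

    # BGP scalability: 12 peers, 200,000 prefixes exchanged on every port
    bgp_prefixes = 12 * 200_000                 # 2.4 million IPv4 (or IPv6) prefixes

    # L3VPN scalability: 6 customer-facing ports x 1,000 VLANs, one eBGP session per VLAN
    emulated_customers = 6 * 1_000              # 6,000 customer connections / VRFs
    vpn_routes = emulated_customers * 200 * 2   # 200 prefixes on each end of every VRF

    print(f"BGP prefixes per address family: {bgp_prefixes:,}")
    print(f"Emulated L3VPN customers:        {emulated_customers:,}")
    print(f"VPN routes installed:            {vpn_routes:,}")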


The test demonstrated the scalability possibilities for service providers to deploy MPLS-based L3VPNs on Juniper‘s MX Series 3D platform with the 16x10GbE line card. The tested service density would allow a service provider to offer 6,000 20 Mbit/s VPN endpoints on a single line card. This demonstrates true economy of scale for deployments in heavily populated small and medium business areas, such as New York City and Hong Kong.

VPLS scalability. In recent years, Ethernet-based services have been experiencing growing success, due in part to the success of the Metro Ethernet Forum (MEF). VPLS is one way to offer multipoint-to-multipoint Ethernet connectivity to customers wishing to connect several sites together in a simple way. Since the provider network acts as an Ethernet switch in VPLS deployments, two aspects are critical for the success of the service: the number of virtual switch instances (VSIs) and MAC addresses to which the routers can scale.

This test was configured in a similar way to the L3VPN test: 6 ports were defined as customer-facing, each configured with 1,000 VLANs, and 6 ports were defined as network-facing. Each emulated customer was sourcing Ethernet traffic from 20 MAC addresses, which meant that in total 20,000 MAC addresses had to be learned on each customer-facing port and each network-facing port. This totaled 240,000 MAC addresses on a single line card.

We repeated the same procedure and configuration that was used in the L3VPN test. Again we performed a throughput test using the Imix previously described, and monitored for lost frames.

And the results did not disappoint. None of the millions of frames sent in the test were lost under these extreme traffic load and service scale conditions.

Services Scale Test Highlights
➜ 6,000 L3VPN instances on a single line card
➜ 2.4 million unique active VPN routes
➜ 6,000 BGP sessions
➜ Zero packet loss
➜ 6,000 VPLS instances on a single line card
➜ 240,000 MAC addresses per line card
➜ Zero packet loss

Both VPLS and L3VPN tests showed that network providers using Juniper’s new 16x10GbE line card can expect impressive scalability from their hardware. Network providers are free to choose between the two services and can expect the same performance regardless of their choice.

MX80 3D Functional Validation

Juniper introduced us to a new, 2-RU platform called the MX80 3D router. The router comes built with a fixed Routing Engine and Forwarding Engine Base (FEB), as well as built-in 4x10GbE ports. The MX80 3D router also has two pluggable Media Interface Card (MIC) slots (not used in these tests). Juniper explained that the MX80 3D router is targeted at providers interested in extending their service reach through a distributed service delivery model; for example, a VPLS multi-tenant unit (MTU) platform.

As a member of the Juniper MX Series 3D family, the MX80 3D router, with its lean form factor, is also able to support L3VPNs and VPLS. In this functional demonstration we verified that the new platform is able to provide the same two services that we tested with the larger 8-RU MX480 3D router using the 16x10GbE line card.

Figure 5: Juniper MX80 3D

L3VPN Services. Much like the rest of the product family running Juniper Networks Junos® software, the MX80 3D router is also a capable router. We repeated the procedure described above with the MX480 3D router, reducing the number of customer-facing ports to two and the network-facing ports to two. These four ports are, after all, an integrated part of the device and hence users could expect to use them for services.

We configured 500 VLANs on each of the customer-facing ports and set up an eBGP neighbor in all of them. Each eBGP neighbor advertised 1,000 routes, which meant that in total the MX80 3D router had to support a million VPN routes. We then sent line-rate traffic to all the routes using the same Internet traffic mix defined above and verified that no frames were lost at line rate.

Despite its compact 2-RU footprint, the MX80 3D router was not fazed by the number of BGP sessions it had to maintain, the packet forwarding it had to process, or the packet rate. We recorded no frame loss.
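
The MAC-address and VPN-route totals quoted for the VPLS test and the MX80 3D demonstration can be reproduced in a few lines (Python, illustrative only; all inputs come from the test descriptions above).

    # MAC-address and VPN-route arithmetic (illustrative only).

    # VPLS scalability on the 16x10GbE line card
    macs_per_port = 1_000 * 20       # 1,000 emulated customers per port x 20 MACs each
    total_macs = macs_per_port * 12  # learned on 6 customer-facing and 6 network-facing ports

    # MX80 3D L3VPN demonstration
    mx80_neighbors = 2 * 500                   # two customer-facing ports, 500 VLANs/eBGP neighbors each
    mx80_vpn_routes = mx80_neighbors * 1_000   # 1,000 routes per eBGP neighbor

    print(f"VPLS MAC addresses on one line card: {total_macs:,}")
    print(f"MX80 3D eBGP neighbors:              {mx80_neighbors:,}")
    print(f"MX80 3D VPN routes:                  {mx80_vpn_routes:,}")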


VPLS Services. We subsequently tested VPLS services on the MX80 3D router. A similar test configuration was used again, this time with 250 VLANs on each of the two customer-facing ports and with 20 MAC addresses on each VSI endpoint. This configuration totaled 500 VPLS instances and 20,000 MAC addresses in a 2-RU form factor. Juniper informed us that this does not represent the system’s maximum MAC address capacity, as we did not investigate the size of the MX80 3D MAC table.

We used the same traffic mix to verify that all VPLS instances were able to switch frames and serve the 500 emulated customers. Once again, we were not disappointed. The router functioned as expected, learning the required number of MAC addresses and switching all frames at line rate.

Both tests demonstrate Layer 2 and Layer 3 service capabilities in a compact form factor that can comfortably cater to service-rich distributed PE and MTU deployments.

MX80 3D Functional Validation Highlights
➜ 1,000 L3VPNs, 1 million VPN routes
➜ 500 VPLS instances with 20,000 MAC addresses
➜ Line-rate Imix forwarding for both services

Power Efficiency

The amount of energy (or power) a system uses to transport data has been a topic of growing interest in recent years. Energy efficiency translates to savings in energy costs, which leads to a reduction in operating costs and, in some countries, to healthy government subsidies.

Test procedures and measurement methodologies are defined by several industry organizations, such as the Alliance for Telecommunications Industry Solutions (ATIS) and by the Energy Consumption Rating (ECR) Initiative. The latter is a partnership between Juniper Networks, Ixia Corporation, and Lawrence Berkeley National Lab. For this test, we used the definition provided by the ECR document (version 1.0.4, November 2008).

The procedure defined by the ECR consists of four steps for determining the energy consumption of a system when forwarding packets. The first step is to determine the maximum "zero-packet-loss" throughput performance of the system. This step was performed by our extensive investigation of an MX480 3D system fully loaded with 16x10GbE line cards. The next three aspects of the specification define the specific system loads that must be used while power is measured. The traffic loads specified are 100, 50, and 0 percent, measured at 20-minute intervals each.

To accurately measure the power drawn by the router, we connected the four Power Entry Modules (PEMs) to a power distribution unit, which allowed us to obtain power usage measurements while running the test traffic. We used Ixia’s IxNetwork to generate test traffic and measured the power draw while sending test traffic for 20 minutes. Three power draw measurements were performed, once with 720 Gbit/s full duplex (100% load), once with 360 Gbit/s full duplex (50% load), and once with no traffic load.

The results are ECR values that describe the amount of energy consumed by the device in order to move one gigabit of line-level data per second. We found that the ECR rating was 3.38 watts per Gbit/s and that the average power consumption at 100-percent load was 2,433.6 watts.

Figure 6: Full Chassis MX480 3D Power Consumption (values in watts: Idle 2,180.4; 50% load 2,320.8; 100% load 2,433.6)
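
The reported efficiency figures follow from the 100-percent-load measurement and the zero-loss throughput established earlier; the sketch below reproduces that division (Python, illustrative only).

    # Energy Consumption Rating arithmetic for the fully loaded MX480 3D (illustrative only).

    POWER_FULL_LOAD_W = 2_433.6   # measured at 100 percent load (720 Gbit/s full duplex)
    THROUGHPUT_GBPS = 720         # zero-loss throughput used as the reference load
    TEN_GBE_PORTS = 96            # six 16x10GbE line cards

    ecr_w_per_gbps = POWER_FULL_LOAD_W / THROUGHPUT_GBPS  # ~3.38 W per Gbit/s
    watts_per_port = POWER_FULL_LOAD_W / TEN_GBE_PORTS    # ~25.35 W per 10GbE port (25.34 W in the highlights)

    print(f"ECR:            {ecr_w_per_gbps:.2f} W per Gbit/s")
    print(f"per 10GbE port: {watts_per_port:.2f} W")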


Functional Demonstrations

Juniper’s MX Series 3D product portfolio is positioned as aggregation and edge routers in the company’s arsenal. As such, the routers must often satisfy a wide range of requirements that are specific to the network edge. These requirements range from subscriber management to supporting metro rollouts and beyond. Juniper used this opportunity to demonstrate a range of applications based on the MX Series 3D portfolio using a pre-released version of the Junos operating system and 4x10GbE line cards.

NAT Services for Business VPN Customers. Business customers require VPN connectivity services between multiple sites while often requiring Internet access. BGP/MPLS IP VPNs deliver inter-site connectivity, sometimes using private addressing schemes to conserve public addresses. Since these addresses are not Internet-routable, access to the Internet becomes an issue for business VPN customers using such addresses.

Network Address Translation (NAT) is a solution for this problem. NAT is a process that is often performed by specialized devices that modify network addresses in IP packets from one address space to another. NAT could be done on customer premises equipment (CPE) or firewalls, but why not have the same network provider that offers the VPN services perform it as well?

Juniper set out to demonstrate just this idea: move the NAT solution to the network and implement it on the PE router, the same router that is the business customer’s point of entry to the network.

In this demonstration, Juniper used the Multiservices Dense Port Concentrator (MS-DPC) to perform the NAT service function. NAT service is only available with a stateful firewall. EANTC configured the tester to emulate two VPN endpoints attached to PE1 and PE2 (as seen in the figure below), and another tester port was configured as an Autonomous System Boundary Router (ASBR) advertising Internet routes to the network.

Figure 7: Wholesale VPN Service with NAT (Ixia IxNetwork load generators in VRF-A attached to PE1 and PE2, both Juniper MX960 3D routers, with a Juniper MX240 3D as PE3; all links 10 Gigabit Ethernet)

EANTC was then able to observe traffic sent between the two VPN endpoints and verify that all traffic was received with no packet loss. Once the VPN traffic was verified, we monitored the route count on the MS-DPC and verified that upon starting to send and receive traffic from the emulated Internet access port, all routes were registered.

DHCP Broadband Subscriber Access to L3VPNs. The functionality demonstrated in this test allows service providers to offer carrier‘s carrier services. When service providers lack geographical reachability, they often buy access from a second service provider. VPN customers that belong to the first provider must, therefore, traverse a different network before reaching the service provider that is actually providing them with the VPN.

In this demonstration, DHCP subscribers (emulated by the Ixia tester) were attached to a router that belonged to one service provider (PE1). PE1 served as a DHCP relay agent that intercepted the DHCP requests. PE1 then associated each incoming DHCP request (based on the port) to a pre-configured user prefix and sent a request to its own Radius server. The Radius server then authenticated the request against the service provider‘s database. The authentication instructed PE1 to forward the DHCP request to an L3VPN.

In the wholesale scenario, we validated that the DHCP request was authenticated and the subscriber admitted into the VPN. The actual IP address assignment to the emulated customer came from the DHCP and Radius servers that belonged to the retail service provider, using the wholesaler’s access infrastructure.

The main focus of the test was to validate the functionality delivered by PE1. The router had to intercept the DHCP requests delivered from the 32,000 emulated active subscribers that were configured. It then had to verify that the DHCP request was allowed to enter the network, receive information from the Radius server regarding what to do with the DHCP request (in this case, forward it to the VPN), and then send the DHCP request further.

Even though we used 32,000 concurrent active subscribers in the demonstration, the router had no problem performing the actions described above and assigning VPN membership upon authentication.
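
To make the wholesale relay workflow easier to follow, the toy Python sketch below models the decision logic described above: map the access port to a pre-configured subscriber prefix, authorize it through a (stubbed) Radius lookup, and hand the request to the retail provider’s L3VPN. The port names, prefixes, and VRF names are invented for illustration; this is conceptual pseudologic, not the Junos implementation that was demonstrated.

    # Conceptual model of the DHCP wholesale relay flow on PE1 (illustrative only;
    # all port names, prefixes and VRF names below are hypothetical).

    PORT_TO_PREFIX = {"access-port-1": "10.1.0.0/24"}   # pre-configured user prefixes
    RADIUS_DB = {"10.1.0.0/24": "RETAIL-VPN-A"}         # stub for the Radius authorization lookup

    def relay_dhcp_request(ingress_port):
        """Return the L3VPN (VRF) a DHCP request should be forwarded into, or raise if rejected."""
        prefix = PORT_TO_PREFIX.get(ingress_port)
        if prefix is None:
            raise PermissionError(f"no subscriber prefix configured for {ingress_port}")
        vrf = RADIUS_DB.get(prefix)   # Radius decides whether and where to admit the subscriber
        if vrf is None:
            raise PermissionError(f"Radius rejected prefix {prefix}")
        return vrf                    # PE1 relays the request inside this VRF

    print(relay_dhcp_request("access-port-1"))   # -> RETAIL-VPN-A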



Metro Ethernet Functional Demonstrations

Having participated in two demonstrations that were focused on L3VPN services, we proceeded to two functional validations that were centered around metro Ethernet rollouts. Juniper demonstrated a mechanism that can rid provider networks of spanning tree protocols and improve Ethernet resiliency, while removing the requirement for backbones to learn a large number of MAC addresses.

PBB-VPLS MAC address hiding for BGP-VPLS. Provider Backbone Bridge (PBB) allows service providers to offer global Ethernet services for existing Layer 2 islands without running into MAC address scalability issues. In essence, what PBB brings to the table is the ability to hide the MAC addresses belonging to these Layer 2 islands from the rest of the network. In addition, PBB interworking with VPLS helps to further contain the MAC table size when multipoint access is extended beyond the metro with VPLS and/or H-VPLS.

Juniper demonstrated the ability to encapsulate Ethernet traffic in IEEE Provider Backbone Bridge frames (PBB, IEEE 802.1ah-2008) and forward the traffic into VPLS instances.

Multichassis LAG for cross-chassis resiliency. Using multichassis link aggregation group (MC-LAG), service providers, cable companies, and enterprises can offer customers a cross-platform resiliency mechanism for their Layer 2 services. MC-LAG, combined with VPLS, removes the service provider’s reliance on spanning tree protocols while providing both resiliency and loop prevention mechanisms.

Figure 8: Multichassis LAG Test Topology (Ixia IxNetwork load generators; CE on a Juniper MX960 3D; VPLS PE3 and PE4 on Juniper MX240 3D routers; PE2 on a Juniper MX960 3D; all links 10 Gigabit Ethernet)

Juniper demonstrated MC-LAG using the configuration described in Figure 8. A link aggregation bundle was configured on the customer edge device (a Juniper MX960 router, in this case) with two link members, each terminating on a different VPLS-PE router (PE3 and PE4). The two VPLS-PE routers used the Inter-Chassis Control Protocol (ICCP) to exchange status information, in addition to the Bidirectional Forwarding Detection (BFD) protocol.

To verify that the MC-LAG worked as a resiliency mechanism, we disconnected the active link bundle member while sending bidirectional Ethernet traffic through the test topology. We then measured the amount of frames that were lost due to our simulated link failure. We used this measurement to deduce that the service was only briefly interrupted and successfully switched over to the alternate link member of the MC-LAG without the use of any spanning tree protocol.

In this functional demonstration EANTC validated that MC-LAG delivered on Juniper’s promise of Ethernet resiliency without the use of spanning tree protocols.

Conclusions

The capabilities enabled by the Junos Trio chipset-based 16x10GbE MPC line card and MX80 3D router were very impressive. EANTC sees Juniper pushing new boundaries concurrently for Ethernet density, services, and subscriber scale, as well as power efficiency.

The fact that the Junos Trio chipset and the same Junos operating system are supported across multiple product lines enables synergies for network operators. They can now support a common set of functionalities throughout the core, aggregation, and edge parts of the network.

About EANTC

The European Advanced Networking Test Center (EANTC) offers vendor-neutral network test services for manufacturers, service providers and enterprise customers. Primary business areas include interoperability, conformance and performance testing for IP, MPLS, Mobile Backhaul, VoIP, Carrier Ethernet, Triple Play, and IP applications.

EANTC AG, Einsteinufer 17, 10587 Berlin, Germany
info@eantc.com, http://www.eantc.com/

V1.2, 20091026



 
Navigating dc architectures tech&sales
Navigating dc architectures tech&salesNavigating dc architectures tech&sales
Navigating dc architectures tech&salesEric Zhaohui Ji
 
An FPGA for high end Open Networking
An FPGA for high end Open NetworkingAn FPGA for high end Open Networking
An FPGA for high end Open Networkingrinnocente
 
Testing Network Routers for Extreme Scale and Performance
Testing Network Routers for Extreme Scale and Performance Testing Network Routers for Extreme Scale and Performance
Testing Network Routers for Extreme Scale and Performance Sailaja Tennati
 
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.io
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.ioFast datastacks - fast and flexible nfv solution stacks leveraging fd.io
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.ioOPNFV
 
7.) convergence (w automation)
7.) convergence (w automation)7.) convergence (w automation)
7.) convergence (w automation)Jeff Green
 
Continuum PCAP
Continuum PCAP Continuum PCAP
Continuum PCAP rwachsman
 
Shunra VE Network Appliance
Shunra VE Network ApplianceShunra VE Network Appliance
Shunra VE Network ApplianceShunra Software
 
cisco-n3k-c31108tc-v-datasheet.pdf
cisco-n3k-c31108tc-v-datasheet.pdfcisco-n3k-c31108tc-v-datasheet.pdf
cisco-n3k-c31108tc-v-datasheet.pdfHi-Network.com
 
I Lab3 I Lab Testcenteroverview
I Lab3 I Lab TestcenteroverviewI Lab3 I Lab Testcenteroverview
I Lab3 I Lab Testcenteroverviewimec.archive
 
cisco-n3k-c31108pc-v-datasheet.pdf
cisco-n3k-c31108pc-v-datasheet.pdfcisco-n3k-c31108pc-v-datasheet.pdf
cisco-n3k-c31108pc-v-datasheet.pdfHi-Network.com
 
5G NR_optimizing RAN design architecture to support new standards.pdf
5G NR_optimizing RAN design architecture to support new standards.pdf5G NR_optimizing RAN design architecture to support new standards.pdf
5G NR_optimizing RAN design architecture to support new standards.pdfssuser5c980e1
 
cisco-n9k-c93108tc-ex-datasheet.pdf
cisco-n9k-c93108tc-ex-datasheet.pdfcisco-n9k-c93108tc-ex-datasheet.pdf
cisco-n9k-c93108tc-ex-datasheet.pdfHi-Network.com
 

Similaire à Juniper Networks MX Series 3D with Junos Trio Chipset - EANTC Report (20)

A Whole Lot of Ports: Juniper Networks QFabric System Assessment
A Whole Lot of Ports: Juniper Networks QFabric System AssessmentA Whole Lot of Ports: Juniper Networks QFabric System Assessment
A Whole Lot of Ports: Juniper Networks QFabric System Assessment
 
Basic 5G.pdf
Basic 5G.pdfBasic 5G.pdf
Basic 5G.pdf
 
Navigating dc architectures tech&sales
Navigating dc architectures tech&salesNavigating dc architectures tech&sales
Navigating dc architectures tech&sales
 
Новые коммутаторы QFX10000. Технология JunOS Fusion
Новые коммутаторы QFX10000. Технология JunOS FusionНовые коммутаторы QFX10000. Технология JunOS Fusion
Новые коммутаторы QFX10000. Технология JunOS Fusion
 
An FPGA for high end Open Networking
An FPGA for high end Open NetworkingAn FPGA for high end Open Networking
An FPGA for high end Open Networking
 
Vyatta 3500 Datasheet
Vyatta 3500 DatasheetVyatta 3500 Datasheet
Vyatta 3500 Datasheet
 
Testing Network Routers for Extreme Scale and Performance
Testing Network Routers for Extreme Scale and Performance Testing Network Routers for Extreme Scale and Performance
Testing Network Routers for Extreme Scale and Performance
 
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.io
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.ioFast datastacks - fast and flexible nfv solution stacks leveraging fd.io
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.io
 
7.) convergence (w automation)
7.) convergence (w automation)7.) convergence (w automation)
7.) convergence (w automation)
 
Continuum PCAP
Continuum PCAP Continuum PCAP
Continuum PCAP
 
Shunra VE Network Appliance
Shunra VE Network ApplianceShunra VE Network Appliance
Shunra VE Network Appliance
 
cisco-n3k-c31108tc-v-datasheet.pdf
cisco-n3k-c31108tc-v-datasheet.pdfcisco-n3k-c31108tc-v-datasheet.pdf
cisco-n3k-c31108tc-v-datasheet.pdf
 
9500 mxc ds
9500 mxc ds9500 mxc ds
9500 mxc ds
 
I Lab3 I Lab Testcenteroverview
I Lab3 I Lab TestcenteroverviewI Lab3 I Lab Testcenteroverview
I Lab3 I Lab Testcenteroverview
 
cisco-n3k-c31108pc-v-datasheet.pdf
cisco-n3k-c31108pc-v-datasheet.pdfcisco-n3k-c31108pc-v-datasheet.pdf
cisco-n3k-c31108pc-v-datasheet.pdf
 
5G NR_optimizing RAN design architecture to support new standards.pdf
5G NR_optimizing RAN design architecture to support new standards.pdf5G NR_optimizing RAN design architecture to support new standards.pdf
5G NR_optimizing RAN design architecture to support new standards.pdf
 
PICK N MX FLYER
PICK N MX FLYERPICK N MX FLYER
PICK N MX FLYER
 
What is 3d torus
What is 3d torusWhat is 3d torus
What is 3d torus
 
ARUBA 8400 Series
ARUBA 8400 SeriesARUBA 8400 Series
ARUBA 8400 Series
 
cisco-n9k-c93108tc-ex-datasheet.pdf
cisco-n9k-c93108tc-ex-datasheet.pdfcisco-n9k-c93108tc-ex-datasheet.pdf
cisco-n9k-c93108tc-ex-datasheet.pdf
 

Plus de Sergii Liventsev

Multiservices MPCs ( MS-MPCs) and Multiservices MICs (MS-MICs) CGNAT
Multiservices MPCs ( MS-MPCs) and Multiservices MICs (MS-MICs) CGNATMultiservices MPCs ( MS-MPCs) and Multiservices MICs (MS-MICs) CGNAT
Multiservices MPCs ( MS-MPCs) and Multiservices MICs (MS-MICs) CGNATSergii Liventsev
 
Allot Communication MobileTrends Report 02/2013
Allot Communication MobileTrends Report 02/2013Allot Communication MobileTrends Report 02/2013
Allot Communication MobileTrends Report 02/2013Sergii Liventsev
 
Решения по безопасности Juniper
Решения по безопасности JuniperРешения по безопасности Juniper
Решения по безопасности JuniperSergii Liventsev
 
Mykonos лучший способ защитить ваш web сайт и web-приложение от атак
Mykonos лучший способ защитить ваш web сайт и web-приложение от атакMykonos лучший способ защитить ваш web сайт и web-приложение от атак
Mykonos лучший способ защитить ваш web сайт и web-приложение от атакSergii Liventsev
 
Juniper Content Delivery Network
Juniper Content Delivery NetworkJuniper Content Delivery Network
Juniper Content Delivery NetworkSergii Liventsev
 
Juniper scalable NAT-solution
Juniper scalable NAT-solutionJuniper scalable NAT-solution
Juniper scalable NAT-solutionSergii Liventsev
 
Развитие решений безопасности Juniper
Развитие решений безопасности JuniperРазвитие решений безопасности Juniper
Развитие решений безопасности JuniperSergii Liventsev
 

Plus de Sergii Liventsev (12)

Multiservices MPCs ( MS-MPCs) and Multiservices MICs (MS-MICs) CGNAT
Multiservices MPCs ( MS-MPCs) and Multiservices MICs (MS-MICs) CGNATMultiservices MPCs ( MS-MPCs) and Multiservices MICs (MS-MICs) CGNAT
Multiservices MPCs ( MS-MPCs) and Multiservices MICs (MS-MICs) CGNAT
 
Allot Communication MobileTrends Report 02/2013
Allot Communication MobileTrends Report 02/2013Allot Communication MobileTrends Report 02/2013
Allot Communication MobileTrends Report 02/2013
 
Juniper for Enterprise
Juniper for EnterpriseJuniper for Enterprise
Juniper for Enterprise
 
Решения по безопасности Juniper
Решения по безопасности JuniperРешения по безопасности Juniper
Решения по безопасности Juniper
 
Mykonos лучший способ защитить ваш web сайт и web-приложение от атак
Mykonos лучший способ защитить ваш web сайт и web-приложение от атакMykonos лучший способ защитить ваш web сайт и web-приложение от атак
Mykonos лучший способ защитить ваш web сайт и web-приложение от атак
 
Juniper Content Delivery Network
Juniper Content Delivery NetworkJuniper Content Delivery Network
Juniper Content Delivery Network
 
SMPLS\ACX
SMPLS\ACX SMPLS\ACX
SMPLS\ACX
 
Juniper scalable NAT-solution
Juniper scalable NAT-solutionJuniper scalable NAT-solution
Juniper scalable NAT-solution
 
Развитие решений безопасности Juniper
Развитие решений безопасности JuniperРазвитие решений безопасности Juniper
Развитие решений безопасности Juniper
 
Juniper Wi-Fi
Juniper Wi-FiJuniper Wi-Fi
Juniper Wi-Fi
 
Juniper QFabric
Juniper QFabricJuniper QFabric
Juniper QFabric
 
Juniper innovations
Juniper innovationsJuniper innovations
Juniper innovations
 

Juniper Networks MX Series 3D with Junos Trio Chipset - EANTC Report

VPLS with 240,000 MAC addresses (a) on a single line card

Power Consumption
➜ 25.34 watts per 10GbE port with line-rate traffic
➜ 3.38 watts per gigabit of throughput

a. Achieved with pre-beta software in the test. Juniper stated that this is not the system maximum.

• MX480 3D router with a single 16x10GbE line card running Junos 10.0-20090914.0 (pre-beta)
• MX80 3D router running Junos 10.1I20090821 (pre-alpha)
• Functional demonstrations were carried out with up to four Juniper MX Series 3D routers (MX240 3D, MX480 3D, and MX960 3D routers) with a Multiservices DPC running Junos 9.6R1.13

Ixia's IxNetwork 5.40 was used to conduct all testing.
Density, Forwarding, Scale

Two groups of tests were focused on the new 16x10GbE line cards introduced by Juniper. In the first set of tests, we focused our attention on the Juniper MX480 3D router hosting a single 16x10GbE line card. We characterized the performance and scalability of the new line cards for unicast traffic (IPv4, IPv6, and a mixture of both). We also validated multicast performance, Layer 3 VPN (L3VPN), and virtual private LAN service scalability, as well as the routing performance of the new line cards.

Throughput Performance on a Fully Loaded Chassis

A new line card, especially one with such impressive specifications as 16 10GbE ports, requires a detailed performance analysis. We started our analysis of Juniper's 16x10GbE line card by investigating the performance of a fully loaded MX480 3D router. The MX480 3D router is an eight-RU (rack unit) router that can accommodate six line cards in addition to two Routing Engines (REs). We used exactly this configuration as defined in the test plan: two RE-S-2000 Routing Engines and six 16x10GbE line cards.

EANTC used the standard methodology defined in the Internet Engineering Task Force (IETF) Request for Comments (RFC) 2544 for IPv4 throughput measurements and RFC 5180 for IPv6 benchmarking of interconnected devices. For this test we attached all the ports on the system (i.e., 96 10GbE ports) to the Ixia tester, which generated full-mesh traffic between all 96 10GbE ports. Since we already knew that we could expect up to 240 Gbit/s unidirectional (120 Gbit/s bidirectional) traffic from each line card, we configured the ports to send traffic at 75-percent line rate, intending to validate the advertised forwarding performance of the 16x10GbE MPC. Figure 1 depicts a sample line card configuration.

Figure 1: Line Card Test Setup Example

For both IPv4 and IPv6 packet types, we found the system capable of delivering 1.44 Tbit/s of full-mesh unidirectional traffic (720 Gbit/s bidirectional) at all tested frame sizes (from 64 bytes to 1,518 bytes) with zero packet loss.

For service providers, the implications of our findings are significant. Until now, Juniper's line cards could accommodate up to 24 10GbE ports in the same form factor. With the introduction of the new 16x10GbE line card, we see a high-performance card that packs three times the forwarding performance and four times the port density of the previously offered configurations.

Density, Forwarding, Scale Test Highlights
➜ 1.44 Tbit/s unidirectional throughput
➜ Zero packet loss at more than 1 billion packets per second

Multicast Performance Characteristics

IP multicast plays a pivotal role in most IPTV deployments, financial applications, and content distribution systems due to its highly efficient use of network resources. The tests performed in this section characterized the 16x10GbE line card's multicast replication capabilities, group capacity, and forwarding performance for mixed unicast and multicast traffic.

Multicast Forwarding and Performance on a Fully Loaded Chassis. The most challenging multicast test setup was an MX480 3D router with 71 ports across its six 16x10GbE cards requesting multicast traffic and only one port acting as the source. In this setup, the majority of the traffic had to cross the backplane and then be further replicated on the individual line cards.

In this test, we defined the first port on the system as a multicast source, sending to 1,200 multicast groups. All remaining 71 ports joined all groups. We ran a multicast throughput test for 64-, 73-, 128-, 512-, 1,024-, and 1,518-byte frames and expected to register zero frame loss for all frame sizes on all multicast receiving ports.

We ran the tests with all frame sizes for a duration of 120 seconds per test iteration and validated that the MX480 3D router, with its fully populated line-card configuration, could replicate multicast traffic at line rate with zero packet loss.

The results of the test show that the new line cards are capable of high-performance multicast replication. Sending frame sizes as small as 64 and 73 bytes requires maximum multicast replication performance from a device, and the MX480 3D router had no problems delivering. For major multicast distribution centers or extremely heavy financial trading applications, these results reassure that the router can fulfill their needs.
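The headline figures on this page follow directly from the port count and the 75-percent offered load. The short sketch below is our own illustration of that arithmetic (it is not part of EANTC's test tooling); the frame-overhead constants are standard Ethernet values rather than numbers taken from the report, and, as we read the report's terminology, the 1.44 Tbit/s figure is the aggregate of both directions of the full-mesh traffic.

ports = 96                    # six 16x10GbE cards, 16 ports each
port_rate_bps = 10e9          # 10GbE line rate
offered_load = 0.75           # 75-percent line rate, as configured in the test

# Aggregate offered traffic per direction, and the two-direction total
per_direction_bps = ports * port_rate_bps * offered_load    # 720 Gbit/s
total_bps = 2 * per_direction_bps                           # 1.44 Tbit/s

# Worst-case packet rate at 64-byte frames
# (64-byte frame + 8-byte preamble + 12-byte inter-frame gap = 84 bytes on the wire)
wire_bits_per_frame = (64 + 8 + 12) * 8
packets_per_second = per_direction_bps / wire_bits_per_frame  # roughly 1.07 billion pps

print(f"{total_bps / 1e12:.2f} Tbit/s total, {packets_per_second / 1e9:.2f} billion pps at 64 bytes")

The last line reproduces the "1 billion+ pps with zero packet loss" highlight: even at the smallest legal frame size, the chassis had to forward more than a billion frames every second.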
Test Setup. The 16x10GbE line card supports 120 Gbit/s bidirectional (240 Gbit/s unidirectional) forwarding performance. The rest of the tests described in this report therefore focused on a single 16x10GbE line card and used 12 of its 16 ports operating at line rate. Figure 2 depicts the configuration used for the tests in the following sections.

Figure 2: Single Line Card Test Setup

Mixed Multicast and Unicast Forwarding. We ran an additional multicast test using a mixture of multicast and unicast traffic. One port was used to source multicast traffic while the rest of the ports (on a single 16x10GbE line card) were sending and receiving full-mesh unicast traffic. This scenario closely mimics real-world deployments, where multicast and unicast are sent across interfaces at the same time.

We were able to verify that all interfaces could forward 50% multicast and 50% unicast traffic for all packet sizes tested. Figure 3 depicts the results for a single line card.

Figure 3: Mixed Multicast and Unicast Throughput

Outgoing Interfaces Scalability. Juniper positions the MX Series platforms as aggregation and services edge routers. An MX Series router, therefore, is often required to serve as the last-hop router (the router responsible for multicast traffic replication). For example, in a Triple Play deployment scenario, an MX Series router will aggregate a large number of DSLAMs and must replicate the IPTV streams to all of them. For this reason, the number of outgoing interfaces (OIFs), multicast groups, and receivers that can be supported is a critical aspect of the new line cards.

We used two 16x10GbE line cards for this test. One line card was used to source 8 multicast groups, each at 3.98 Mbit/s (roughly the equivalent of a standard-definition channel). On the second line card, we configured 300 VLANs on 11 receiving ports, each receiving all 8 groups. With 8 groups each sending 3.98 Mbit/s of traffic into each VLAN, we expected 31.8 Mbit/s per VLAN. Altogether, with 300 VLANs (and their headers) on a 10GbE port, we observed 9.98 Gbit/s of traffic exiting each port while each VLAN had receivers for all 8 groups. This meant that the system was receiving 26,400 IGMP joins and replicating the 8 multicast groups to 3,300 outgoing interfaces, all on a single line card.

The results met the test's goals. We were able to verify that all multicast receivers in all VLANs on all interfaces were receiving all 8 streams with no packet loss. We reached 9.98 Gbit/s on the receiving interfaces even when facing the most aggressive packet replication scenario of 64-byte frames.

The test results demonstrated that the 16x10GbE line card is highly capable at multicast replication and is able to scale its outgoing interfaces. While the number of VLANs used was on the high side for IPTV deployments, media-rich financial applications, in which various traders are split across VLANs, could greatly benefit from the scalability seen in the test.

Multicast Group Scalability. The goal of this test was to measure the number of multicast groups that an MX480 3D router with two 16x10GbE line cards could support. The test goal was simple: increase the number of multicast groups while forwarding traffic without losing a single frame.

RFC 3918, which defines multicast benchmarking methodology, dictates that to find the maximum number of groups a system can service, the number of groups be increased as long as no frame loss is recorded on the system under test. Instead of going through this tedious exercise, we asked Juniper to provide a target value of multicast groups that EANTC should validate.

Juniper provided an impressive target value of 60,000 multicast groups. With 11 ports, all requesting 60,000 groups, we expected to register 660,000 IGMP requests on the system and then forward packets at various frame sizes (from 64 to 1,518 bytes) at line rate.
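The scaling figures quoted above are straightforward products of the configuration. The sketch below is our own restatement of that bookkeeping, not EANTC's test scripts; the per-port rate it computes is slightly below the observed 9.98 Gbit/s because it ignores Ethernet and VLAN header overhead.

groups = 8
group_rate_mbps = 3.98            # roughly one standard-definition channel
vlans_per_port = 300
receiver_ports = 11

per_vlan_mbps = groups * group_rate_mbps               # ~31.8 Mbit/s per VLAN
per_port_gbps = vlans_per_port * per_vlan_mbps / 1e3   # ~9.55 Gbit/s payload per 10GbE port

outgoing_interfaces = vlans_per_port * receiver_ports  # 3,300 OIFs on one line card
igmp_joins = outgoing_interfaces * groups               # 26,400 IGMP joins

# Multicast group scalability test on the same 11 receiver ports
group_target = 60_000
receiver_groups = group_target * receiver_ports         # 660,000 receiver-group joins

print(per_vlan_mbps, per_port_gbps, outgoing_interfaces, igmp_joins, receiver_groups)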
We sent the IGMP join requests on all receiver ports and validated that the system indeed registered all 660,000 requests. We then started the multicast traffic for all groups and let traffic run for 120 seconds. We recorded zero packet loss in all test runs for all frame sizes.

This IGMP scaling figure far exceeded today's application requirements. The number of groups reached provides ample headroom for innovative, media-rich financial and transactional (stock ticker) services for high-end financial institutions in the next five to ten years. Such high scaling values validate that the MX480 3D router with the 16x10GbE line card is, from a multicast group scalability perspective, future-proof and provides headroom for future multicast-intensive applications.

Multicast Performance Test Highlights
➜ Line-rate multicast forwarding performance
➜ 3,300 outgoing interfaces on a single line card
➜ 60,000 IGMP groups with a single source
➜ 660,000 multicast receivers on a single line card

Services Scale

Forwarding performance, multicast replication, and scalability are the building blocks with which service providers can offer services to their customers. Once we established a baseline for the performance of the new line cards, we investigated their services scalability.

BGP Scalability. We started by performing two throughput tests, each with an aggregate of 2.4 million BGP routes: one test for IPv4 and one for IPv6. Both tests used the same hardware, which included an MX480 3D router, a single 16x10GbE line card, and two Routing Engines.

We configured 12 BGP peers, one for each of the 12 ports tested, and exchanged 200,000 prefixes on every port. The IPv6 test used /64 routes. Two IPv4 route configurations were used: one with /30 routes and a second with a more diverse set ranging from /24 to /29 routes. Once we established the BGP neighbors between the tester and router ports, we advertised the routes and sent traffic for all prefixes. We repeated all three tests for 64-, 73-, 128-, 512-, 1,024-, and 1,518-byte packets. The IPv6 packet sizes were almost identical, apart from the smallest packets, which were 82 bytes.

We expected the router to be able to forward traffic to all routes at all frame sizes for both IPv4 and IPv6. The results met our expectations. For starters, the BGP peers remained stable for the duration of the tests. We were also able to send traffic at line rate for all prefixes without losing a single frame.

L3VPN Scalability. We used the same hardware configuration to perform the L3VPN and virtual private LAN service (VPLS) scalability tests. We divided the 12 ports into two sets: 6 were configured as customer-facing and 6 as backbone-facing. On each of the six customer-facing ports, Juniper engineers configured 1,000 VLANs, each representing a customer and each establishing an eBGP neighbor with the tester. In total we had 6,000 emulated customer connections set up for the test. Attached to the six ports we defined as upstream (i.e., network-facing), we used Ixia's IxNetwork to emulate six provider (P) routers and six provider edge (PE) routers. Each of the emulated PE routers was configured with the corresponding Virtual Routing and Forwarding (VRF) instance to match the customer attached to the downstream ports.

Figure 4: L3VPN Scalability Test Setup

Each of the emulated 6,000 customers advertised 200 prefixes on each end of the VRF, which equaled 2.4 million routes. Once all eBGP sessions were established and routes were exchanged, we performed a throughput test using a common Internet mix (Imix) of frame sizes: 64 bytes, 576 bytes, 1,400 bytes, and 1,470 bytes.

The router showed no hesitation in establishing the eBGP sessions and maintaining them. The 2.4 million VRF routes were learned and installed in the forwarding table after several minutes and remained unchanged for the duration of the test. We sent traffic to all VRF prefixes using the Imix described above and recorded zero packet loss across all 2.6 billion packets sent.
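For readers who want to check the session and route counts, the following sketch (ours, simply restating the configuration described above) shows how the 2.4 million figures arise in both the Internet-table BGP test and the L3VPN test.

# Internet-table BGP scalability test
bgp_peers = 12                                # one eBGP peer per tested port
prefixes_per_peer = 200_000
bgp_routes = bgp_peers * prefixes_per_peer    # 2.4 million IPv4 or IPv6 routes

# L3VPN scalability test
customer_ports = 6
vlans_per_port = 1_000
customers = customer_ports * vlans_per_port   # 6,000 emulated customers / eBGP sessions
prefixes_per_vrf_end = 200
vpn_routes = customers * prefixes_per_vrf_end * 2   # 200 prefixes on each end -> 2.4 million

print(bgp_routes, customers, vpn_routes)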
The test demonstrated the scalability available to service providers deploying MPLS-based L3VPNs on Juniper's MX Series 3D platform with the 16x10GbE line card. The tested service density would allow a service provider to offer 6,000 20 Mbit/s VPN endpoints on a single line card. This demonstrates true economy of scale for deployments in heavily populated small and medium business areas, such as New York City and Hong Kong.

VPLS Scalability. In recent years, Ethernet-based services have been experiencing growing success, due in part to the success of the Metro Ethernet Forum (MEF). VPLS is one way to offer multipoint-to-multipoint Ethernet connectivity to customers wishing to connect several sites together in a simple way. Since the provider network acts as an Ethernet switch in VPLS deployments, two aspects are critical to the success of the service: the number of virtual switch instances (VSIs) and the number of MAC addresses to which the routers can scale.

This test was configured in a similar way to the L3VPN test: 6 ports were defined as customer-facing, each configured with 1,000 VLANs, and 6 ports were defined as network-facing. Each emulated customer was sourcing Ethernet traffic from 20 MAC addresses, which meant that in total 20,000 MAC addresses had to be learned on each customer-facing port and each network-facing port. This totaled 240,000 MAC addresses on a single line card.

We repeated the same procedure and configuration that was used in the L3VPN test. Again we performed a throughput test using the Imix previously described, and monitored for lost frames.

And the results did not disappoint. None of the millions of frames sent in the test were lost under these extreme traffic load and service scale conditions.

Services Scale Test Highlights
➜ 6,000 L3VPN instances on a single line card
➜ 2.4 million unique active VPN routes
➜ 6,000 BGP sessions
➜ Zero packet loss
➜ 6,000 VPLS instances on a single line card
➜ 240,000 MAC addresses per line card
➜ Zero packet loss

Both the VPLS and L3VPN tests showed that network providers using Juniper's new 16x10GbE line card can expect impressive scalability from their hardware. Network providers are free to choose between the two services and can expect the same performance regardless of their choice.

MX80 3D Functional Validation

Juniper introduced us to a new, 2-RU platform called the MX80 3D router. The router comes with a fixed, built-in Routing Engine and Forwarding Engine Base (FEB), as well as built-in 4x10GbE ports. The MX80 3D router also has two pluggable Media Interface Card (MIC) slots (not used in these tests). Juniper explained that the MX80 3D router is targeted at providers interested in extending their service reach through a distributed service delivery model; for example, as a VPLS multi-tenant unit (MTU) platform.

As a member of the Juniper MX Series 3D family, the MX80 3D router, with its lean form factor, is also able to support L3VPNs and VPLS. In this functional demonstration we verified that the new platform is able to provide the same two services that we tested with the larger 8-RU MX480 3D router using the 16x10GbE line card.

Figure 5: Juniper MX80 3D

L3VPN Services. Much like the rest of the product family running Juniper Networks Junos® software, the MX80 3D router is also a capable router. We repeated the procedure described above for the MX480 3D router, reducing the number of customer-facing ports to two and the network-facing ports to two. These four ports are, after all, an integrated part of the device, and hence users could expect to use them for services.

We configured 500 VLANs on each of the customer-facing ports and set up an eBGP neighbor in all of them. Each eBGP neighbor advertised 1,000 routes, which meant that in total the MX80 3D router had to support a million VPN routes. We then sent line-rate traffic to all the routes using the same Internet traffic mix defined above and verified that no frames were lost.

Despite its compact 2-RU footprint, the MX80 3D router was not fazed by the number of BGP sessions it had to maintain, the packet forwarding it had to process, or the packet rate. We recorded no frame loss.
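Once again the scale figures are simple products of the configuration; the sketch below is our own summary of the numbers quoted above, covering both the 16x10GbE VPLS test and the MX80 3D L3VPN validation.

# VPLS scalability on the 16x10GbE line card
customer_ports, network_ports = 6, 6
vsis_per_port = 1_000                  # one VLAN/VSI per emulated customer
macs_per_customer = 20

vpls_instances = customer_ports * vsis_per_port                 # 6,000 VPLS instances
macs_per_port = vsis_per_port * macs_per_customer               # 20,000 MACs learned per port
total_macs = (customer_ports + network_ports) * macs_per_port   # 240,000 MACs per line card

# MX80 3D L3VPN functional validation
mx80_ports = 2
mx80_vlans_per_port = 500
mx80_routes_per_neighbor = 1_000
mx80_vpns = mx80_ports * mx80_vlans_per_port                    # 1,000 L3VPNs / eBGP neighbors
mx80_vpn_routes = mx80_vpns * mx80_routes_per_neighbor          # 1 million VPN routes

print(vpls_instances, total_macs, mx80_vpns, mx80_vpn_routes)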
VPLS Services. We subsequently tested VPLS services on the MX80 3D router. A similar test configuration was used again, this time with 250 VLANs on each of the two customer-facing ports and with 20 MAC addresses on each VSI endpoint. This configuration totaled 500 VPLS instances and 20,000 MAC addresses in a 2-RU form factor. Juniper informed us that this does not represent the system's maximum MAC address capacity, as we did not investigate the size of the MX80 3D MAC table.

We used the same traffic mix to verify that all VPLS instances were able to switch frames and serve the 500 emulated customers. Once again, we were not disappointed. The router functioned as expected, learning the required number of MAC addresses and switching all frames at line rate.

Both tests demonstrate Layer 2 and Layer 3 service capabilities in a compact form factor that can comfortably cater to service-rich distributed PE and MTU deployments.

MX80 3D Functional Validation Highlights
➜ 1,000 L3VPNs, 1 million VPN routes
➜ 500 VPLS instances with 20,000 MAC addresses
➜ Line-rate Imix forwarding for both services

Power Efficiency

The amount of energy (or power) a system uses to transport data has been a topic of growing interest in recent years. Energy efficiency translates to savings in energy costs, which leads to a reduction in operating costs and, in some countries, to healthy government subsidies.

Test procedures and measurement methodologies are defined by several industry organizations, such as the Alliance for Telecommunications Industry Solutions (ATIS) and the Energy Consumption Rating (ECR) Initiative. The latter is a partnership between Juniper Networks, Ixia Corporation, and Lawrence Berkeley National Lab.

For this test, we used the definition provided by the ECR document (version 1.0.4, November 2008). The procedure defined by the ECR consists of four steps for determining the energy consumption of a system when forwarding packets. The first step is to determine the maximum zero-packet-loss throughput performance of the system. This step was performed by our extensive investigation of an MX480 3D system fully loaded with 16x10GbE line cards. The next three aspects of the specification define the specific system loads that must be used while power is measured. The traffic loads specified are 100, 50, and 0 percent, each measured over a 20-minute interval.

To accurately measure the power drawn by the router, we connected the four Power Entry Modules (PEMs) to a power distribution unit, which allowed us to obtain power usage measurements while running the test traffic. We used Ixia's IxNetwork to generate test traffic and measured the power draw while sending test traffic for 20 minutes. Three power draw measurements were performed: once with 720 Gbit/s full duplex (100% load), once with 360 Gbit/s full duplex (50% load), and once with no traffic load.

The results are ECR values that describe the amount of energy consumed by the device in order to move one gigabit of line-level data per second. We found that the ECR rating was 3.38 watts per Gbit/s and that the average power consumption at 100-percent load was 2,433.6 watts.

Figure 6: Full Chassis MX480 3D Power Consumption (values in watts) — 100% load: 2,433.6; 50% load: 2,320.8; idle: 2,180.4
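The per-gigabit and per-port efficiency figures in this report follow directly from the measured 100-percent-load power draw and the zero-loss throughput established earlier. The sketch below is our own arithmetic, not part of the ECR procedure itself; the per-port result works out to roughly the 25.34 W per 10GbE figure quoted on page 1, with small differences attributable to rounding.

power_full_load_w = 2433.6     # measured average power at 100% load
throughput_gbps = 720.0        # zero-loss full-duplex throughput per direction
ports = 96                     # 10GbE ports in the fully loaded MX480 3D chassis

ecr_w_per_gbps = power_full_load_w / throughput_gbps   # ~3.38 W per Gbit/s
watts_per_port = power_full_load_w / ports              # ~25.35 W per 10GbE port

print(f"{ecr_w_per_gbps:.2f} W/Gbit/s, {watts_per_port:.2f} W per 10GbE port")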
Functional Demonstrations

Juniper's MX Series 3D product portfolio is positioned as aggregation and edge routers in the company's arsenal. As such, the routers must often satisfy a wide range of requirements that are specific to the network edge. These requirements range from subscriber management to supporting metro rollouts and beyond. Juniper used this opportunity to demonstrate a range of applications based on the MX Series 3D portfolio using a pre-released version of the Junos operating system and 4x10GbE line cards.

NAT Services for Business VPN Customers. Business customers require VPN connectivity services between multiple sites while often also requiring Internet access. BGP/MPLS IP VPNs deliver inter-site connectivity, sometimes using private addressing schemes to conserve public addresses. Since these addresses are not Internet-routable, access to the Internet becomes an issue for business VPN customers using such addresses.

Network Address Translation (NAT) is a solution for this problem. NAT is a process, often performed by specialized devices, that modifies network addresses in IP packets from one address space to another. NAT could be done on customer premises equipment (CPE) or firewalls, but why not perform the same service within the network of the provider offering the VPN services? Juniper set out to demonstrate just this idea: move the NAT solution to the network and implement it on the PE router, the same router that is the business customer's point of entry to the network.

In this demonstration, Juniper used the Multiservices Dense Port Concentrator (MS-DPC) to perform the NAT service function; the NAT service is only available together with the stateful firewall. EANTC configured the tester to emulate two VPN endpoints attached to PE1 and PE2 (as seen in the figure below), and another tester port was configured as an Autonomous System Boundary Router (ASBR) advertising Internet routes to the network.

Figure 7: Wholesale VPN Service with NAT

EANTC was then able to observe traffic sent between the two VPN endpoints and verify that all traffic was received with no packet loss. Once the VPN traffic was verified, we monitored the route count on the MS-DPC and verified that, upon starting to send and receive traffic from the emulated Internet access port, all routes were registered.

DHCP Broadband Subscriber Access to L3VPNs. The functionality demonstrated in this test allows service providers to offer carrier's carrier services. When service providers lack geographical reach, they often buy access from a second service provider. VPN customers that belong to the first provider must, therefore, traverse a different network before reaching the service provider that is actually providing them with the VPN.

In this demonstration, DHCP subscribers (emulated by the Ixia tester) were attached to a router that belonged to one service provider (PE1). PE1 served as a DHCP relay agent that intercepted the DHCP requests. PE1 then associated each incoming DHCP request (based on the port) with a pre-configured user prefix and sent a request to its own RADIUS server. The RADIUS server then authenticated the request against the service provider's database. The authentication result instructed PE1 to forward the DHCP request into an L3VPN.

In the wholesale scenario, we validated that the DHCP request was authenticated and the subscriber admitted into the VPN. The actual IP address assignment to the emulated customer came from the DHCP and RADIUS servers that belonged to the retail service provider, using the wholesaler's access infrastructure.

The main focus of the test was to validate the functionality delivered by PE1. The router had to intercept the DHCP requests delivered from the 32,000 emulated active subscribers that were configured. It then had to verify that each DHCP request was allowed to enter the network, receive information from the RADIUS server regarding what to do with the request (in this case, forward it into the VPN), and then send the DHCP request onward. Even though we used 32,000 concurrent active subscribers in the demonstration, the router had no problem performing these actions and assigning VPN membership upon authentication.
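To make the PE1 decision flow concrete, the following sketch models the relay steps described above: map the ingress port to a pre-configured user prefix, authorize via RADIUS, and relay the DHCP request into the authorized VPN. It is purely illustrative pseudo-logic of our own; the function names, the example port and prefix, and the RADIUS attributes are hypothetical and do not reflect Junos configuration, Juniper's implementation, or EANTC's test tooling.

# Hypothetical model of the PE1 wholesale DHCP relay behaviour described above.
PORT_TO_PREFIX = {"access-port-1": "10.10.0.0/16"}   # example mapping, not from the report

def relay_dhcp_request(ingress_port, dhcp_discover, radius, vpn_forwarder):
    """Associate the request with a user prefix, authorize it, and relay it into an L3VPN."""
    prefix = PORT_TO_PREFIX.get(ingress_port)
    if prefix is None:
        return None                                    # unknown access port: drop the request
    decision = radius.authorize(ingress_port, prefix)  # wholesaler's RADIUS server decides
    if not decision.accepted:
        return None                                    # authentication failed: do not relay
    # Relay the DHCP request into the VRF named by RADIUS; the retail provider's DHCP
    # server inside that VPN performs the actual address assignment.
    return vpn_forwarder.send(vrf=decision.target_vrf, message=dhcp_discover)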
Metro Ethernet Functional Demonstrations

Having participated in two demonstrations that were focused on L3VPN services, we proceeded to two functional validations centered around metro Ethernet rollouts. Juniper demonstrated a mechanism that can rid provider networks of spanning tree protocols and improve Ethernet resiliency, while removing the requirement for backbones to learn a large number of MAC addresses.

PBB-VPLS MAC Address Hiding for BGP-VPLS. Provider Backbone Bridge (PBB) allows service providers to offer global Ethernet services for existing Layer 2 islands without running into MAC address scalability issues. In essence, what PBB brings to the table is the ability to hide the MAC addresses belonging to these Layer 2 islands from the rest of the network. In addition, PBB interworking with VPLS helps to further contain the MAC table size when multipoint access is extended beyond the metro with VPLS and/or H-VPLS. Juniper demonstrated the ability to encapsulate Ethernet traffic into IEEE Provider Backbone Bridge frames (PBB, IEEE 802.1ah-2008) and forward the traffic into VPLS instances.

Multichassis LAG for Cross-Chassis Resiliency. Using a multichassis link aggregation group (MC-LAG), service providers, cable companies, and enterprises can offer customers a cross-platform resiliency mechanism for their Layer 2 services. MC-LAG, combined with VPLS, removes the service provider's reliance on spanning tree protocols while providing both resiliency and loop prevention mechanisms.

Figure 8: Multichassis LAG Test Topology

Juniper demonstrated MC-LAG using the configuration described in Figure 8. A link aggregation bundle was configured on the customer edge device (a Juniper MX960 router, in this case) with two link members, each terminating on a different VPLS-PE router (PE3 and PE4). The two VPLS-PE routers used the Inter-Chassis Control Protocol (ICCP) to exchange status information, in addition to the Bidirectional Forwarding Detection (BFD) protocol.

To verify that the MC-LAG worked as a resiliency mechanism, we disconnected the active link bundle member while sending bidirectional Ethernet traffic through the test topology. We then measured the number of frames that were lost due to our simulated link failure. We used this measurement to deduce that the service was only briefly interrupted and successfully switched over to the alternate link member of the MC-LAG without the use of any spanning tree protocol.

In this functional demonstration EANTC validated that MC-LAG delivered on Juniper's promise of Ethernet resiliency without the use of spanning tree protocols.
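The "briefly interrupted" conclusion relies on the standard loss-derived outage calculation: frames lost during the failover divided by the offered frame rate. The report does not publish the measured values, so the sketch below uses purely hypothetical numbers of our own to illustrate the method.

# Hypothetical example of deriving MC-LAG switchover time from frame loss.
offered_frame_rate_pps = 1_000_000    # example offered load, not a measured value
frames_lost = 50_000                  # example loss count, not a measured value

# Every lost frame corresponds to 1/rate seconds during which no path was available.
outage_seconds = frames_lost / offered_frame_rate_pps
print(f"Estimated service interruption: {outage_seconds * 1000:.1f} ms")   # 50.0 ms in this example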
Conclusions

The capabilities enabled by the Junos Trio chipset-based 16x10GbE MPC line card and the MX80 3D router were very impressive. EANTC sees Juniper pushing new boundaries concurrently for Ethernet density, services and subscriber scale, and power efficiency.

The fact that the Junos Trio chipset and the same Junos operating system are supported across multiple product lines enables synergies for network operators. They can now support a common set of functionalities throughout the core, aggregation, and edge parts of the network.

About EANTC

The European Advanced Networking Test Center (EANTC) offers vendor-neutral network test services for manufacturers, service providers and enterprise customers. Primary business areas include interoperability, conformance and performance testing for IP, MPLS, Mobile Backhaul, VoIP, Carrier Ethernet, Triple Play, and IP applications.

EANTC AG
Einsteinufer 17, 10587 Berlin, Germany
info@eantc.com, http://www.eantc.com/

V1.2, 20091026