Iperf testing report
 “Iperf was developed by NLANR/DAST as a modern alternative for measuring maximum TCP and
UDP bandwidth performance. Iperf allows the tuning of various parameters and UDP
characteristics. Iperf reports bandwidth, delay jitter, datagram loss.”

The above is Iperf’s summary on SourceForge. Iperf is widely used as a network performance
tool and is under active development. There is a newer version, Iperf3, hosted on Google Code,
which is a reimplementation from scratch but does not appear to support UDP yet. Iperf also has
a Java GUI front end, JPerf.

Iperf is a classic and widely used tool, even though it measures by flooding the link with TCP or
UDP traffic, which does not satisfy our needs.

The reasons for testing it are the following:

    1. We need a benchmark for our testing, and it should be accurate.
    2. We need to ensure our environment works correctly.
    3. Iperf is often used to produce cross traffic in bandwidth testing.

Iperf has been used for bandwidth measurement for many years by many people, so it is a good
choice.

Iperf at first glance
Using Iperf to measure bandwidth
Iperf has both client and server pieces, so it requires installation at both ends of the connection
you are measuring. Iperf can send both TCP and UDP packets. For more information on usage,
please refer to “IPerf - The Easy Tutorial”.
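
As a minimal sketch of the pattern the tests below follow (the server address here is a
placeholder, not one of the machines tested below):

On the server machine:
iperf.exe -s

On the client machine, sending TCP for 10 seconds:
iperf.exe -c <server-ip> -t 10

The runs below add options for bi-directional measurement (-r), window size (-w), parallel
streams (-P), and UDP (-u, -b).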

Test case: two machines, no bandwidth throttling

TCP

Measure bi-directional bandwidth
Bi-directional bandwidth is measured using the ‘-r’ parameter on the client; the two directions
are measured sequentially.

Server (10.224.172.117)

D:\bw_test>iperf.exe -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1880] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 2821
[ ID] Interval        Transfer Bandwidth
[1880] 0.0-10.0 sec 100 MBytes 84.1 Mbits/sec
------------------------------------------------------------
Client connecting to 10.224.172.186, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1848] local 10.224.172.117 port 3273 connected with 10.224.172.186 port 5001
[ ID] Interval        Transfer Bandwidth
[1848] 0.0-10.0 sec 94.5 MBytes 79.2 Mbits/sec


Client(10.224.172.186)

E:testbandwidthsoftwarejperfreleasejperf-2.0.2bin>iperf.exe -c 10.224.172
.117 -P 1 -t 10 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.224.172.117, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1828] local 10.224.172.186 port 2821 connected with 10.224.172.117 port 5001
[ ID] Interval        Transfer Bandwidth
[1828] 0.0-10.0 sec 100 MBytes 83.9 Mbits/sec
[1944] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3273
[ ID] Interval        Transfer Bandwidth
[1944] 0.0-10.0 sec 94.5 MBytes 79.2 Mbits/sec

10.224.172.117 and 10.224.172.186 are in the same subnet; the theoretical bandwidth is
100 Mbit/sec.

Adjust the TCP window size
The bandwidth measured so far is nearly 82 Mbits/sec with a TCP window size of only 8 KByte;
we can increase the window size to improve throughput.
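
As a rough sanity check, a single TCP connection can carry at most about window / RTT.
Assuming a LAN round-trip time on the order of 0.8 ms (not measured here), an 8 KByte window
gives roughly

    8 KByte / 0.8 ms = 8192 × 8 bit / 0.0008 s ≈ 82 Mbit/s,

which matches the value observed above, while a 1 MByte window is far more than a
100 Mbit/s path needs.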

Server(10.224.172.117)

D:\bw_test>iperf.exe -s -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1880] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 2916
[ ID] Interval        Transfer Bandwidth
[1880] 0.0-10.1 sec 112 MBytes 92.8 Mbits/sec
------------------------------------------------------------
Client connecting to 10.224.172.186, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1848] local 10.224.172.117 port 3275 connected with 10.224.172.186 port 5001
[ ID] Interval        Transfer Bandwidth
[1848] 0.0-10.1 sec 111 MBytes 92.1 Mbits/sec


Client(10.224.172.186)

E:testbandwidthsoftwarejperfreleasejperf-2.0.2bin>iperf.exe -c 10.224.172
.117 -P 1 -t 10 -r -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.224.172.117, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1832] local 10.224.172.186 port 2916 connected with 10.224.172.117 port 5001
[ ID] Interval        Transfer Bandwidth
[1832] 0.0-10.1 sec 112 MBytes 92.8 Mbits/sec
[1944] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3275
[ ID] Interval        Transfer Bandwidth
[1944] 0.0-10.1 sec 111 MBytes 92.1 Mbits/sec

After adjusting the TCP window size, the measured bandwidth is nearly 92.5 Mbits/sec.

Use parallel TCP
Parallel TCP connections are supposed to improve throughput. We use two parallel streams
below, and the measured bandwidth is nearly 93.3 Mbit/sec.

Server (10.224.172.117)

D:\bw_test>iperf.exe -s -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1880] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 3025
[1844] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 3026
[ ID] Interval        Transfer Bandwidth
[1844] 0.0-10.2 sec 56.8 MBytes 46.9 Mbits/sec
[1880] 0.0-10.2 sec 56.8 MBytes 46.8 Mbits/sec
[SUM] 0.0-10.2 sec 114 MBytes 93.6 Mbits/sec
------------------------------------------------------------
Client connecting to 10.224.172.186, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1880] local 10.224.172.117 port 3276 connected with 10.224.172.186 port 5001
[1868] local 10.224.172.117 port 3277 connected with 10.224.172.186 port 5001
[ ID] Interval        Transfer Bandwidth
[1868] 0.0-10.2 sec 56.5 MBytes 46.4 Mbits/sec
[1880] 0.0-10.2 sec 56.7 MBytes 46.6 Mbits/sec
[SUM] 0.0-10.2 sec 113 MBytes 93.1 Mbits/sec



Client(10.224.172.186)

E:testbandwidthsoftwarejperfreleasejperf-2.0.2bin>iperf.exe -c 10.224.172
.117 -P 2 -t 10 -r -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.224.172.117, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1812] local 10.224.172.186 port 3026 connected with 10.224.172.117 port 5001
[1828] local 10.224.172.186 port 3025 connected with 10.224.172.117 port 5001
[ ID] Interval        Transfer Bandwidth
[1812] 0.0-10.2 sec 56.8 MBytes 46.8 Mbits/sec
[1828] 0.0-10.2 sec 56.8 MBytes 46.8 Mbits/sec
[SUM] 0.0-10.2 sec 114 MBytes 93.6 Mbits/sec
[1784] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3276
[1964] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3277
[ ID] Interval        Transfer Bandwidth
[1964] 0.0-10.2 sec 56.5 MBytes 46.5 Mbits/sec
[1784] 0.0-10.2 sec 56.7 MBytes 46.6 Mbits/sec
[SUM] 0.0-10.2 sec 113 MBytes 93.1 Mbits/sec


UDP

When using UDP, you should specify the send bandwidth; the default value is 1 Mbit/sec.

Server(10.224.172.117)

D:\bw_test>iperf.exe -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 8.00 KByte (default)
------------------------------------------------------------
[1928] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 3680
[ ID] Interval        Transfer Bandwidth                Jitter Lost/Total Datagrams
[1928] 0.0-10.0 sec 29.3 MBytes 24.5 Mbits/sec 1.727 ms 0/20902 (0%)


Client(10.224.172.186)

E:testbandwidthsoftwarejperfreleasejperf-2.0.2bin>iperf.exe -c 10.224.172
.117 -t 10 -u -b 100M
------------------------------------------------------------
Client connecting to 10.224.172.117, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 8.00 KByte (default)
------------------------------------------------------------
[1908] local 10.224.172.186 port 3680 connected with 10.224.172.117 port 5001
[ ID] Interval        Transfer Bandwidth
[1908] 0.0-10.0 sec 29.3 MBytes 24.5 Mbits/sec
[1908] Server Report:
[1908] 0.0-10.0 sec 29.3 MBytes 24.5 Mbits/sec 1.726 ms 0/20902 (0%)
[1908] Sent 20902 datagrams

Why is it lower than we expect?
The result is only 24.5 Mbits/sec, and at first there seems to be no explanation other than a bug.
I have searched for this case; others encounter the same problem, but there is no answer yet.
The Windows version used here is 1.7.0, which is very old; the newest version is 2.0.5, which
needs Cygwin to compile on Windows, so I will test it on Linux to see whether the problem
remains.

Here is the result. As we can see, the problem remains, but we have other clues: the loss rate is
very high, 96% when the send bandwidth limit is 100M and 49% when it is 50M. Now we can
explain why the bandwidth measured using UDP cannot reach 93 Mbits/sec: Iperf has no flow
control when sending UDP packets, and the UDP buffer size is too small for the sending rate, so
many packets are dropped, which makes the measured value much lower than expected.
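
A plausible remedy is to enlarge the socket buffers with the same -w option used for TCP. This
is only a sketch of the kind of commands behind the Linux runs below (whose console output
was captured as screenshots), using the addresses of the Linux machines:

# on the server (10.224.172.199): UDP mode with a 1 MByte buffer
iperf -s -u -w 1M

# on the client (10.224.172.181): offer 100 Mbit/s for 10 seconds with a 1 MByte buffer
iperf -c 10.224.172.199 -u -b 100M -w 1M -t 10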

Server (10.224.172.199)
Client (10.224.172.181)
[Linux console output captured as screenshots; not reproduced in this text.]

After adjusting the buffer size, the measured bandwidth is 37.4-47.6 Mbits/sec, much better
than before.

Server (10.224.172.199)
Client (10.224.172.181)
[Linux console output captured as screenshots; not reproduced in this text.]

Even though UDP has this problem, we can still use it to generate cross traffic when we measure
other tools.

Conclusion
       Iperf is precise enough to be used as a benchmark in our later tests.
       Iperf can be used to produce various kinds of cross traffic: TCP, UDP, parallel streams, and
       bi-directional traffic at a specified rate (for UDP, we should adjust the buffer); see the
       example below.
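
For example, a plausible way to generate steady UDP cross traffic at 20 Mbit/s for 60 seconds
while another tool is being measured (the receiver address is a placeholder):

# background UDP stream at a fixed offered rate, with an enlarged buffer
iperf -c <receiver-ip> -u -b 20M -t 60 -w 1M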


moremojo!

Jeromy.Fu
