1. Public
SR-IOV in numbers
• Intel® 82599 network controller

Tommy Värre, Software Architect, Tieto, tommy.varre@tieto.com
Niilo Minkkinen, Senior Software Developer, Tieto, niilo.minkkinen@tieto.com
© Tieto Corporation
2. Public
SR-IOV and Virtio
• Single Root I/O Virtualization (SR-IOV)
• Direct hardware access to the PCIe card; the hypervisor is used only for
interrupts. All data is copied through DMA.
• Intel's network card has an L2 switch that is used to route traffic
between VMs on the same host.
• Virtio
• Virtio emulates network hardware. The hypervisor is used for interrupts,
and all data is copied through, and routed by, the hypervisor.
• It will be very interesting to see how MR-IOV will do in the future
• Multi Root I/O Virtualization: multiple compute nodes can share one
PCIe card.
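On Linux, SR-IOV virtual functions (VFs) for an 82599 (ixgbe driver) can typically be enabled through sysfs; a minimal sketch, assuming a reasonably recent kernel with SR-IOV enabled in the BIOS. The interface name eth0 and the VF count are illustrative assumptions, not from the slides:

```shell
# Illustrative only: enabling SR-IOV VFs on an ixgbe-driven 82599 port
# via sysfs (requires root; "eth0" and the VF count are assumptions).

# How many VFs the device supports:
cat /sys/class/net/eth0/device/sriov_totalvfs

# Create 4 VFs; each can then be passed through to a VM (e.g. via
# libvirt/KVM PCI passthrough), giving it direct DMA access to the NIC:
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# The VFs show up as extra PCI functions:
lspci | grep -i "82599.*Virtual Function"
```

Each VF looks like an independent PCI device to the guest, which is what lets the card's internal L2 switch move VM-to-VM traffic without the hypervisor on the data path.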
2
© Tieto Corporation
2014-02-11
3. Public
Test setup
• OpenStack is used to launch the virtual machines
• Control node, 2× compute nodes (i7, 4 + 4 cores)
• All VMs use 1 core, 768 MB memory and a 10 GB disk
• Network card is an Intel® 82599 10 Gigabit Ethernet Controller
(fiber SFPs, 2×10 Gb); only one 10 Gb port is used
• All virtual machines carry about 30% of the payload 'in idle mode'
• TCP window size was fixed to 8.0–16.0 KByte
• iperf is used to generate network traffic, with no limits
• All 20 VMs are running all the time; only the needed number use the
network. This gives some background load to the server
• In the first case, traffic is between compute nodes
• In the second case, traffic stays inside one compute node
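The slides don't show the exact iperf invocation; a plausible sketch with iperf2, using its real `-s`, `-c`, `-w` (TCP window) and `-y C` (CSV report) options. The addresses, duration and throughput number below are illustrative, not measured data from the tests:

```shell
# Hypothetical invocation (not from the slides):
# server on one VM:
#   iperf -s -w 16K
# client on another VM, CSV output for easy post-processing:
#   iperf -c 10.0.0.2 -w 16K -t 60 -y C
#
# In iperf2's -y C output the 9th comma-separated field is the
# throughput in bits/s. Convert a sample line (made-up numbers) to
# MB/s, the unit used on the result charts:
sample='20140211120000,10.0.0.1,5001,10.0.0.2,53214,3,0.0-60.0,7500000000,1000000000'
echo "$sample" | awk -F, '{printf "%.1f MB/s\n", $9 / 8 / 1000000}'
```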
4. Public
Summary
• These tests were run with an 'off-the-shelf' environment and software
• Using basic drivers and no extra tuning of the system
• SR-IOV networking is about 10–15% faster than Virtio when traffic is
going to the outside network
• Virtio seems equal when only a few VMs in the same compute node use
the network, e.g. two virtual machines sending network traffic between
them on the same server (all 20 VMs were running, but only a few used
the network)
• But when there are more VMs and the CPU has to schedule more, SR-IOV
seems to be a bit better
• How to get more performance (fine-tuning the environment)
• Fine-tune scheduling, use DPDK, try different packet sizes, limit the
number of VMs, tune the network drivers, etc.
8. Public
Traffic between 2 VMs in the same compute node
[Chart: throughput in MB/s (0–300), Virtio vs SR-IOV, x-axis 10–100]
Virtio is faster, because SR-IOV data is routed on the PCIe card
9. Public
Traffic between 10 VMs in the same compute node (VM<->VM ×5)
[Chart: throughput in MB/s (50–130), five Virtio and five SR-IOV VM pairs, x-axis 10–100]
With more VMs, scheduling and other factors come into the picture, and
SR-IOV and Virtio are very close to each other
Editor's notes: SR-IOV is about 10% faster.