2. Intel Contributions
INTEL CONFIDENTIAL
Releases: Havana, Juno, Kilo, Mitaka
• Host CPU feature request, PCI pass-through & SR-IOV support
• NUMA awareness and placement
• Hugepage support, CPU pinning, NUMA locality of PCI devices, OVS+DPDK (separate agent)
• CPU threading policies, security groups for OVS+DPDK (stateless)
• Telemetry capture (via collectd)
• OVS+DPDK (merged with OVS agent), OVS+DPDK controlled by ODL
• OVF metadata import
3. CPU pinning
By default, guest vCPUs are allowed to float freely across host pCPUs.
CPU pinning binds each vCPU used by a guest to a specific pCPU.
The guest gets dedicated pCPUs for more deterministic performance.
The Kilo release of OpenStack added CPU pinning capability.
http://openstack-in-production.blogspot.com/2015/08/numa-and-cpu-pinning-in-high-throughput.html
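In Kilo and later, pinning is requested through flavor extra specs. A minimal sketch (the flavor name `m1.pinned` and its sizes are illustrative):

```shell
# Create a flavor whose guests get dedicated pCPUs (name/sizes are illustrative).
openstack flavor create m1.pinned --vcpus 4 --ram 4096 --disk 20
# hw:cpu_policy=dedicated pins each guest vCPU to a host pCPU.
openstack flavor set m1.pinned --property hw:cpu_policy=dedicated
```

The compute hosts also need pinning-capable configuration (e.g. a `vcpu_pin_set` in nova.conf in Kilo-era releases) so the scheduler can place these guests.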
4. CPU thread policy 1/2
When running workloads on SMT hosts, it is important to be aware of the impact that
thread siblings can have.
Thread siblings share a number of core components (e.g. caches and execution units),
and contention on these shared components can impact performance.
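On a Linux host you can see which logical CPUs are thread siblings by reading the sysfs topology, for example:

```shell
# Show which logical CPUs are SMT thread siblings (i.e. share a physical core).
# On a host without SMT, each CPU lists only itself.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  echo "${cpu##*/}: siblings $(cat "$cpu"/topology/thread_siblings_list)"
done
```

Two busy vCPUs landing on the same sibling pair contend for that core's resources, which is exactly what the thread policies below let you control.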
6. Summary
Isolate model
• The most powerful in terms of predictable compute capacity: 2x cores really is 2x more cores.
• The highest-performance apps may need this to help with SLA compliance.
• May lower the effective compute capacity of the platform.
Prefer model
• Drives up platform density by aiming to pack CPUs.
• Good for platform utilisation rates.
• Unpredictable in terms of the additional compute capacity that will be added.
• SLAs to be rigorously monitored via telemetry.
Require model
• Almost as good as Prefer at driving up platform density by aiming to pack CPUs.
• Good for platform utilisation rates.
• More predictable than Prefer in terms of the additional compute capacity that will be added.
• SLAs to be rigorously monitored via telemetry.
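The three models map to the `hw:cpu_thread_policy` flavor extra spec; each also requires `hw:cpu_policy=dedicated`. A sketch (the flavor names are illustrative):

```shell
# isolate: guest vCPUs get whole cores; the sibling threads are left unused.
openstack flavor set m1.isolate --property hw:cpu_policy=dedicated \
    --property hw:cpu_thread_policy=isolate
# prefer: pack vCPUs onto thread siblings when available, fall back otherwise.
openstack flavor set m1.prefer --property hw:cpu_policy=dedicated \
    --property hw:cpu_thread_policy=prefer
# require: schedule only onto SMT hosts, placing vCPUs on thread siblings.
openstack flavor set m1.require --property hw:cpu_policy=dedicated \
    --property hw:cpu_thread_policy=require
```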