An overview of the Verizon Cloud Architecture and which Xen features are in use, including planned features. A major goal is Quality of Service (QoS) in all areas: CPU, memory, network, and disk. The network is layer-2 based, which makes it more flexible. We provide a GUI as well as an API (the GUI uses the API for all tasks). Another design goal is to handle scale correctly. We also plan to offer full access to the guest console (VGA and serial). The tool stack is distributed and runs as guests on Xen.
Enhancements to Xen/QEMU required to support seamless import of VMware guests include: emulation of devices available in VMware, support for the VMware tools backdoor, PCI bridge emulation enhancements, and fine-grained control over the placement of PCI devices, CPUID, and the PCI hole size.
XPDS14: An Overview of the Verizon Cloud Architecture - Don Slutz, Verizon
1. Xen Project
An overview of the Verizon Cloud Architecture
By
Don Slutz
2. Design Goals
Next Generation Cloud (start from scratch)
Minimal people to support cloud
Big (i.e. fully scalable)
Quality of Service
Reliability
Run any imported guest VM unchanged
Worldwide
All things can be done via API
3. Data Centers
Culpeper, VA, USA
Santa Clara, CA, USA
Denver, CO, USA
São Paulo, Brazil
Miami, FL, USA
London, United Kingdom
Amsterdam, Netherlands
4. Data Center POD
[Diagram: the Internet feeds two MRS routers, which connect to four CORE switches, which in turn connect to the SeaMicro chassis.]
5. A total of 2 Managed Routing Service (MRS) connections, each consisting of 4 10GbE single-mode fiber links, connect to CORE switches 1, 2, 3, and 4.
The 4 CORE switches are connected via 10GbE to all SeaMicro chassis.
Up to 144 SeaMicro SM15000-OP or SM15000-XN chassis, each with 64 2TB STEC solid state disk drives.
6. We have 56 Xen servers, 8 storage servers, and 8 layer 2 switches in each SeaMicro chassis.
All guest network traffic is sent on its own VLAN inside a chassis and over a (possibly different) VLAN to the CORE switches.
All the rest of the code runs as guests on the Xen servers.
7. Targeted Console Support
Fully support VGA console
Fully support serial console
This includes interacting with the BIOS and/or GRUB (or another boot loader) during start-up.
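For reference, a minimal sketch of wiring up both consoles with stock xl (standard xl.cfg options; nothing here is specific to our stack):

    # Guest config: VGA console reachable over VNC, plus an emulated
    # serial port backed by a pty.
    vnc = 1
    serial = 'pty'

At runtime, `xl console -t serial <domain>` attaches to the serial console, so BIOS and boot-loader output sent to the serial port can be seen and driven from there, while the VGA console is reached over VNC.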
8. Parts of VMware guest support.
Linux is not so picky (PCI devices can move).
Windows cares a lot (moving them will cause re-activation).
Newer VMware guests are closer; older ones need more special device support in QEMU, like the LSI 1068 (SAS), 1068e (SAS on PCIe), and 53c1030 (older SCSI).
There are also the VMware network adapters: vmxnet1, vmxnet2, and vmxnet3.
These devices also need to be supported in SeaBIOS.
9. VMware likes to make lots of VMware-style PCI bridges.
There is also an AGP bridge, on which I have not found any documentation.
VMware is the best-supported import source so far; handling other virtualization platforms such as Hyper-V is planned.
10. From VMware's web site, "Mechanisms to determine if software is running in a VMware virtual machine" (KB 1009458):
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458
Testing the CPUID hypervisor present bit and CPUID leaf 0x40000000.
Testing the virtual BIOS DMI information
Testing the hypervisor port
Even though VMware says software should check these in order, not all software does; so all three needed to be added. The hypervisor port is also known as the VMware tools backdoor or the VMware hypercall interface.
Note: older Linux incorrectly looks for VMware everywhere but the SMBIOS serial data (it checks via dmi_name_in_vendors(), which covers several SMBIOS fields but not the serial); newer Linux does check the serial (via dmi_name_in_serial()). Also, VMware says to look for "VMware-", but Linux only looks for "VMware".
11. VMware has 4 PCI bridges that are not currently in QEMU:
A different model HOST bridge: 82443 (currently 82441).
An AGP PCI bridge: 82443BX/ZX/DX AGP bridge
A VMware Inc PCI bridge.
A VMware Inc PCIe bridge.
Currently hvmloader does not handle the number of PCI bridges that VMware likes to build: 1 PCI and 32 PCIe.
The strange part is that even with an AGP bridge present, the VGA device is not on it.
For Windows, it matters which PCI bridge a given PCI device sits on, so we needed more control over where a PCI device ends up. This was done by adding bus= and addr= to vifs.
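A sketch of what a pinned device might look like in a guest config (bus= and addr= are our additions, not upstream xl syntax; the bridge name and numbers are only illustrative):

    # Keep the emulated NIC at the bus/slot Windows originally saw
    # under VMware, so the device does not "move" and trigger
    # re-activation.
    vif = [ 'bridge=xenbr0,model=e1000,bus=2,addr=0x05' ]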
12. As part of Quality of Service (QoS), bps_rd, bps_wr, bps, iops_rd, iops_wr, and iops were added to disks, along with a new top-level limits= with sub-options bps_rd, bps_wr, iops_rd, and iops_wr.
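A sketch of the resulting syntax (assumed from the option names above; the per-disk sub-options mirror QEMU's block-throttling parameters, and limits= is our addition):

    # Cap one disk at 100 MB/s reads and 2000 write IOPS, with a
    # guest-wide ceiling via the top-level limits=.
    disk = [ 'phy:/dev/vg0/guest1,xvda,w,bps_rd=104857600,iops_wr=2000' ]
    limits = [ 'bps_rd=209715200,bps_wr=209715200,iops_rd=4000,iops_wr=4000' ]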
We can also adjust where QEMU places default PCI devices.
VMware also changed how various PCI devices look based
on the VMware hardware version.
Another part of VMware support is that the memory layout can be changed by adjusting the size of the PCI (MMIO) hole below 4G.
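For example, a guest that had a 3 GiB PCI hole under VMware could be matched with the mmio_hole= xl.cfg knob that came out of this work (size in bytes, HVM guests):

    # 3 GiB MMIO hole below 4G: guest RAM below 4G ends at 1 GiB and
    # the remainder is relocated above 4G.
    mmio_hole = 3221225472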
13. One of the most useful of all of these is the "VMware mouse". Since it is an absolute-position mouse, it is much nicer to use over slower networks, with both Linux and Windows.
14. An area that has not been fully investigated is the detection and handling of multiple hypervisor interfaces. Currently Xen has partial support for two:
Xen
viridian (Microsoft's Hyper-V)
We have added a third: VMware. So should all of them be presented? Only one? Some combination?
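For context, a guest can already probe for this situation: Linux-style detection scans candidate CPUID bases at 0x100 intervals, precisely because more than one interface may be exposed at once (Xen, for instance, moves its own leaves up to 0x40000100 when viridian is enabled). A small sketch of such a scan, under the same assumptions as the earlier example:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                      uint32_t *c, uint32_t *d)
    {
        __asm__ volatile("cpuid"
                         : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                         : "a"(leaf), "c"(0));
    }

    int main(void)
    {
        /* Walk the hypervisor leaf range; print every base that
         * carries a signature. */
        for (uint32_t base = 0x40000000; base < 0x40010000; base += 0x100) {
            uint32_t max_leaf, b, c, d;
            char sig[13] = { 0 };

            cpuid(base, &max_leaf, &b, &c, &d);
            memcpy(sig, &b, 4); memcpy(sig + 4, &c, 4); memcpy(sig + 8, &d, 4);
            if (sig[0])
                printf("base 0x%08x: \"%s\" (max leaf 0x%08x)\n",
                       base, sig, max_leaf);
        }
        return 0;
    }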