4. Containers 101: the Linux “Container” Host
• A Linux distro = the Linux kernel + management and user-space tools
  – i.e. libraries, additional software, docs, etc.
• A container image specifies a base set of tools/libs/SW
  – Defined in a Dockerfile
[Diagram: a standard Linux host (Linux kernel 4.2; management and user-space tools, i.e. libraries, additional software, and docs; OS config/application SW) running app processes 1 through n, contrasted with a Photon OS host running the Docker Engine and containers 1 through n, each container carrying its own tools/libs/SW as specified by its Dockerfile (the image config).]
5. Dichotomy: Dev/Ops have different “cares”
Developers like:
• Portable – ability to move Dev → Test → Prod
• Fast – rapid start times & control
• Light – minimal configuration and footprint
Ops needs:
• Secure – meet security standards
• Network – hook into existing network
• Data Persistence – access to the “state” of the app
• Consistent Management – single pane of glass
6. Developers and Ops Divide
Containers IN DEVELOPMENT vs. Containers IN PRODUCTION
8. Container Technology & VMware
• Photon OS – VMware Linux distribution; container host optimized for vSphere, AWS, GCE
• vSphere Integrated Containers (new feature) – Virtual Container Host; Docker API endpoint; container visibility & operations
• Photon Platform (new platform) – container-optimized cloud platform; multi-tenant / high scale; Kubernetes as a Service
10. Where in the stack?
Stack: Physical Infrastructure → Virtualized Infrastructure → IaaS → SW Development / Platform Services
VIC lives at the virtualized-infrastructure layer: Docker endpoint, Virtual Container Host, Net|Sec|Ops visibility
https://github.com/vmware/vic
11. Container Registry & Container Management Portal
[Diagram: the vRealize Suite and a container management portal with registry sit above vSphere Integrated Containers; each Virtual Container Host (VCH1, VCH2) exposes a container API endpoint backed by the VIC Engine and runs a pool of container VMs (C-VMs) alongside ordinary VMs.]
13. The Value Proposition of vSphere Integrated Containers
• Run in the same vSphere environment as VMs
• Virtual Container Hosts backed by a resource pool
• Resources can be dynamically added/removed
• NSX micro-segmentation and networking
• vCenter operations work with containers like they do with VMs (DRS, host evacuation, etc.)
• Ecosystem tools available for VMs can be used with containers (vROps)
[Diagram: a Virtual Container Host (container engine + Docker API) backed by vSphere resource pools (e.g. 50 GHz/512 GB and 75 GHz/768 GB), with each container VM booting its own Photon OS kernel.]
16. vSphere Integrated Containers – Operating Model
[Diagram: a vSphere cluster of ESXi hosts with vCenter Server, VSAN, and NSX. Running vic-machine-linux create deploys the VCH container endpoint into the cluster; docker run -d -p 80:80 nginx then instantiates a container VM (its own Linux kernel running the nginx process) alongside regular VMs.]
17. The Virtual Container Host (VCH)
• A collection of vSphere compute resources wrapped in a vApp construct
• Upon deployment, the VCH includes a “Docker API endpoint VM”
• This is the endpoint users communicate with via the Docker CLI
• The VCH vApp includes all container VMs instantiated via docker run
• vSphere Integrated Containers has multi-tenancy built in
• A single ESXi host can run n VCHs, each with different resources
18. VIC Engine Requirements
• Download the VIC Engine on the client machine
  – Enter the commands below from your terminal:
    • wget https://registry.corp.local:9443/vic_1.1.1.tar.gz
    • tar -zxvf vic_1.1.1.tar.gz
• DRS has to be enabled on the vSphere cluster
• A vNetwork Distributed Switch is required
  – Create an L2-isolated dPG (logical switch) for container–VCH communication. A unique, isolated network is needed for each VCH (with NSX, VXLAN can be used for isolation).
  – Create a logical switch for the containers’ external connectivity, with Internet access. DHCP can be used (e.g. with an NSX Edge). The external network can be shared between multiple VCHs.
• Open outgoing TCP 2377 on each ESXi host
  – Use the vic-machine update firewall command. Example:
    ./vic-machine-linux update firewall --target vcsa-01a.corp.local --user administrator@corp.local --compute-resource RegionA01-COMP01 --allow
19. Installation of the Virtual Container Host (VCH)
• Run the vic-machine command from the client machine to create the VCH vApp in the vSphere cluster. Example:
  ./vic-machine-linux create --target vcsa-01a.corp.local --user administrator@corp.local --compute-resource RegionA01-COMP01 --image-store RegionA01-ISCSI01-COMP01 --volume-store RegionA01-ISCSI01-COMP01:default --public-network VM-RegionA01-vDS-COMP --public-network-ip 192.168.100.22/24 --public-network-gateway 192.168.100.1 --dns-server 192.168.110.10 --container-network VM-RegionA01-vDS-COMP:routable --bridge-network Bridge01-RegionA01-vDS-COMP --name virtual-container-host --registry-ca=/etc/docker/certs.d/registry.corp.local/ca.crt --no-tls
• Add the --container-network option if you want to connect containers to a network other than the bridge network (recommended)
• All components to be consumed later by the Docker client have to be identified during VCH installation
• Command result: “Installer completed successfully”
20. VIC Engine Packaging
VIC Engine comes with a set of assets that “inject” VCHs into a vSphere setup:
• vic-machine is the CLI that creates Virtual Container Hosts
  – Available for Linux | Windows | Mac
• appliance.iso is the ISO each VCH endpoint VM boots from
  – VCH endpoint VMs are stateless and only boot from an ISO; this greatly simplifies management and upgrades
• bootstrap.iso is the ISO used as the “just enough kernel” for container VMs
  – On top of this kernel, VIC “layers” the Docker image you want to run
  – This blog has good info on C-VM persistence (http://blog.think-v.com/?p=4302)
21. VCH Network Nomenclature
[Diagram: the VCH vApp, with the Docker endpoint VM attached to the Docker client network, the vSphere management network, the public network, bridge network(s), and container network(s).]
• Docker Client Management Network: the network used to interact with the VCH VM via a Docker client.
• vSphere Management Network: the network used by the VCH VM and the container VMs to interact with vSphere.
• Public Network: the equivalent of eth0 on a Docker host. This is the network used to expose services to the outside world (via -p).
• Bridge Network(s): the equivalent of docker0 on a Docker host.
• Container Network(s): networks containers can attach to directly for inbound/outbound communication, bypassing the VCH VM.
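The five network roles above correspond to flags on vic-machine create. A minimal sketch, assuming the flag names --client-network, --management-network, --public-network, --bridge-network, and --container-network; the port-group names (PG-*) are hypothetical placeholders, so verify both against your VIC release:

```shell
# Sketch: mapping VCH network roles to vic-machine create flags.
# PG-* port-group names are hypothetical; flag names assumed from the VIC CLI.
CMD="./vic-machine-linux create \
  --client-network PG-DockerClients \
  --management-network PG-vSphereMgmt \
  --public-network PG-Public \
  --bridge-network PG-Bridge01 \
  --container-network PG-Routable:routable"
echo "$CMD"
```

When a role is omitted, vic-machine falls back to a default; check the VIC installation docs for your release before relying on that.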
22. VIC Networking Option 1 – Default Docker Behavior
[Diagram: container VMs (172.16.0.1, 172.16.0.2) on an internal, isolated network behind the VCH VM, which fronts the public network at 10.0.1.2 (DHCP).]
• Containers are accessed through the VCH VM
• Default if no container port group is specified when creating the container
• Typical docker run -p use case
23. VIC Networking Option 2 – Connecting Containers Directly to External Networks
• Containers can be attached to container networks directly to avoid a single point of failure
  – The --container-network option has to be used during VCH installation
• DHCP can be used to assign container IP addresses
• A container can be accessed directly through its IP address, without NAT
• Typical docker run --network use case
• Container networks are displayed with docker network ls
• Look up the DHCP-assigned IP with docker inspect
[Diagram: container VMs attached via DHCP to Container Network 1 and Container Network 2, bypassing the VCH VM (10.0.1.2, DHCP) on External Network 1.]
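Since docker inspect <ID> | grep IPAddress needs a live daemon, the extraction step can be sketched against a trimmed, hypothetical inspect document (the real output is much larger, and the network name "routable" and the address are made-up sample values):

```shell
# Hypothetical, trimmed `docker inspect` output saved to a file.
cat > /tmp/inspect.json <<'EOF'
[
  {
    "Id": "00f017a8c2a6",
    "NetworkSettings": {
      "Networks": {
        "routable": { "IPAddress": "192.168.100.201" }
      }
    }
  }
]
EOF
# Extract the first IPAddress value, as `| grep IPAddress` would surface it.
ip=$(grep -o '"IPAddress": "[0-9.]*"' /tmp/inspect.json | head -1 | cut -d'"' -f4)
echo "$ip"   # -> 192.168.100.201
```

Against a live endpoint, docker inspect's --format option (a Go template) is the tidier way to pull a single field instead of grepping raw JSON.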
24. Storage Components
• Image Store (--image-store)
  – The only mandatory storage-related parameter
  – The datastore where VCHs and Docker images get saved
  – Docker images get saved in a folder named “VIC” under the VCH folder
  – The --image-store option supports specifying a folder (e.g. datastore_name/folder_name); if you do so, the VIC folder moves inside folder_name and the VCH folder remains in the root
  – The image store can be shared among different VCHs; when using the same folder_name, different namespaces get created to avoid race conditions
• Volume Store (--volume-store)
  – The --volume-store option supports specifying a folder (e.g. datastore_name/folder_name)
  – It requires a label, for later reference by the Docker CLI
  – The volume store can be shared among different VCHs
  – Best practice: specify a folder name
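The datastore/folder and datastore/folder:label syntax described above can be made concrete with a small sketch; the folder names and the "default" label below are hypothetical values in the style of the slide-19 example:

```shell
# Image store: datastore, optionally with a folder (images land under <folder>/VIC).
IMAGE_STORE="RegionA01-ISCSI01-COMP01/vic-images"
# Volume store: datastore[/folder] plus a label the Docker CLI refers to later
# (in VIC, selecting a store by label via a volume --opt is the assumed mechanism).
VOLUME_STORE="RegionA01-ISCSI01-COMP01/vic-volumes:default"
echo "--image-store ${IMAGE_STORE} --volume-store ${VOLUME_STORE}"
```

Only the image store is mandatory; without a volume store, docker volume create against the VCH has nowhere to place VMDKs.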
28. Container Provisioning from Templates
• Different registries can be used with Project Admiral
• Docker Compose import/export support is available
• Containers can be provisioned from images or templates
• vSphere Integrated Containers (VIC) provisioning is also supported
30. Basic End-User Commands
• Set up the DOCKER_HOST environment variable
  – export DOCKER_HOST=192.168.100.22:2375
• Run a Docker image from Docker Hub (Internet)
  – docker run busybox date
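Once DOCKER_HOST points at the VCH endpoint, every subsequent docker command targets VIC instead of a local daemon. A minimal sketch using the example address from this deck:

```shell
# Point the Docker client at the VCH endpoint (no TLS in this POC, port 2375).
export DOCKER_HOST=192.168.100.22:2375
echo "docker client now targets: ${DOCKER_HOST}"
# Equivalent one-off form, without exporting:
#   docker -H 192.168.100.22:2375 run busybox date
```

The -H form is what the later registry slides use; both routes reach the same VCH endpoint VM.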
31. Basic End-User Commands (cont’d)
• Run a Docker image from the private registry
  – docker run registry.corp.local/myproject/busybox:1.26 date
• Log in to the private registry
  – docker login registry.corp.local
32. Basic End-User Commands (cont’d)
• Creating a Docker volume (for data persistence)
  – docker volume create --opt Capacity=10GB --name registrycache
• The volume gets created as a VMDK
33. Advanced End-User Commands
• Self-provision a Docker daemon
  – docker run -v registrycache:/var/lib/docker --net external -d vmware/dinv:latest --tls -r registry.corp.local
• Find the IP address of the newly created Docker daemon
  – docker inspect <DOCKER_ID> | grep IPAddress
34. Advanced End-User Commands – Registry (cont’d)
• Tag an image
  – docker -H 192.168.100.128:2375 tag 00f017a8c2a6 registry.corp.local/myproject/busybox:1.26
• Push the image to the private registry
  – docker -H 192.168.100.128:2375 push registry.corp.local/myproject/busybox:1.26
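The fully qualified name used in these commands follows Docker's registry-host/project/repository:tag convention; a small sketch assembling it from the values in the example:

```shell
# Pieces of a fully qualified image reference for a private registry.
REGISTRY="registry.corp.local"   # registry host (Harbor, in this POC)
PROJECT="myproject"              # Harbor project / namespace
IMAGE="busybox"                  # repository name
TAG="1.26"                       # image tag
echo "${REGISTRY}/${PROJECT}/${IMAGE}:${TAG}"   # -> registry.corp.local/myproject/busybox:1.26
```

Tagging an image with this full name is what tells docker push which registry and project to send it to.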
35. Advanced End-User Commands – Registry (cont’d)
• Note the role-based access controls in the private registry
  – The testguest user is authorized to pull only
  – The testdev user is authorized to push & pull
  – User membership/roles are fully configurable per project
  – Authentication against AD is available but out of scope for the POC
37. vSphere Integrated Containers: SDDC Integrations
We bring the following capabilities to container management:
• Compute – auto load balancing across multiple container hosts; scale and manage Docker containers without service disruption
• Storage and Availability – portable and persistent storage for Docker containers
• Network and Security – virtualized networking and security (NSX) for container-based applications; micro-segmentation, isolating traffic flow from one container to another
• Intelligent Operations – balance workloads across multiple container hosts using existing management tools
38. Call to Action
Try it out:
• HOL-1730-USE-1 – vSphere Integrated Containers
• Getting Started with vSphere Integrated Containers: https://vmware.github.io/vic/assets/files/html/vic_installation/index.html
Visit us on GitHub:
• https://vmware.github.io/vic-product/
• https://github.com/vmware/vic
• https://github.com/vmware/harbor
• https://github.com/vmware/admiral
Runtime isolation: configurable resource limits.
Runtime isolation: ports can be reconfigured, even for third-party or legacy software.
No package dependency hell: use different versions of PHP, Perl, Ruby, npm... whatever, on the same host.
Integrate deployment of third-party or legacy software into your standard Docker deployment.
Profit from unified container boundaries (logging, monitoring, backup).
Easier to participate in the cloud: as soon as you package to a standard container and deploy to the cloud, you profit from the cloud features you have (e.g. hot migration, automatic backup, autoscaling, and so on).
Deploy an entire software stack (e.g. DB, engine, web) as one Docker image; sometimes a good idea.
Easier to start everything you need on your laptop.
A lot of predefined containers for every kind of third-party software out there.
No distribution borders: run everything built for the Linux kernel on any distribution.
Many of the compatibility issues that exist aren’t kernel-related; they are more about the tools/libraries/SW that make up a distro.
A basic Apache server Dockerfile; to use it, either ADD or bind-mount content under /var/www:
  FROM ubuntu:12.04
  MAINTAINER Kimbro Staken version: 0.1
  RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*
  ENV APACHE_RUN_USER www-data
  ENV APACHE_RUN_GROUP www-data
  ENV APACHE_LOG_DIR /var/log/apache2
  EXPOSE 80
  CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
Much like the VM abstracted away the complexity of hardware, containers abstract away the complexity of package management for a given Linux OS distro.
Fast to start, gain control over the environment
No need to wait for someone to spin up a VM
Lightweight both in terms of footprint and configuration
Ops
Networking & Security in containers can be complex
This is what causes divisions between dev and ops teams. Developers expect their apps to run the same way in production as on their laptops. IT Ops, on the other hand, has to do damage control when something breaks in the production environment.
VIC allows developers to keep the same container interface while allowing vSphere admins to leverage the same infrastructure & tools.
Q: What does the consumer care about?
Q: What does the Provider care about?
Components of vSphere Integrated Containers include: VIC engine, Registry, and Management Portal. We have already talked about the VIC engine. Now, we’ll go through the management portal and registry.
vSphere 6.0 or 6.5 are supported as of 11-29-16
Ops visibility, troubleshooting, and security are difficult and/or foreign
VIC offers the same consumption model for the SW developer, but with the operational tools of VMs