This session offers techniques for securing Docker containers and hosts using open source network virtualization technologies to implement microsegmentation. Come learn real tips and tricks that you can apply to keep your production environment secure.
Secure Your Containers: What Network Admins Should Know When Moving Into Production
1. Secure Your Containers! What Network Admins Should Know When Moving Into Production
Cynthia Thomas, Systems Engineer
@_techcet_
2. { Why is networking an afterthought?
Containers, Containers, Containers!
3. Why Containers?
• Much lighter weight and less overhead than virtual machines
• Don’t need to copy the entire OS or libraries – keep track of deltas
• More efficient unit of work for cloud-native apps
• Crucial tools for rapid-scale application development
• Increase density on a physical host
• Portable container image for moving/migrating resources
4. Containers: Old and New
• LXC: operating system-level virtualization through a virtual environment that has its own process and network space
• 8-year-old technology
• Leverages Linux kernel cgroups
• Also other namespaces for isolation
• Focus on System Containers
• Security:
• Previously possible for code running as root in the guest system to execute on the host system
• LXC 1.0 brought “unprivileged containers” to restrict hardware accessibility
• Ecosystem:
• Vendor neutral; evolving LXD, CGManager, LXCFS
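Both LXC and Docker build on the kernel primitives mentioned above. As a quick orientation, the following minimal Python sketch (assuming a Linux host with /proc mounted) lists the namespaces and cgroup memberships of the current process; a containerized process would show different namespace IDs than the host.

    import os

    # Each entry in /proc/<pid>/ns is a symlink naming a namespace type and its inode ID.
    # Processes in the same container share these IDs; the host has different ones.
    ns_dir = "/proc/self/ns"
    for ns in sorted(os.listdir(ns_dir)):
        print(ns, "->", os.readlink(os.path.join(ns_dir, ns)))

    # /proc/<pid>/cgroup shows which cgroup hierarchies (cpu, memory, ...) confine the process.
    with open("/proc/self/cgroup") as f:
        print(f.read())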
5. Containers: Old and New
• Explosive growth: Docker created a de-facto standard image format and API for defining and interacting with containers
• Docker: also operating system-level virtualization through a virtual environment
• 3-year-old technology
• Application-centric API
• Also leverages Linux kernel cgroups and kernel namespaces
• Moved from LXC to the libcontainer implementation
• Portable deployment across machines
• Brings image management and more seamless updates through versioning
• Security:
• Networking: linuxbridge, iptables
• Ecosystem:
• CoreOS, Rancher, Kubernetes
6. Container Orchestration Engines
• Step forth the management of containers for application deployment!
• Scale applications with clusters where the underlying deployment unit is a container
• Examples include Docker Swarm, Kubernetes, Apache Mesos
8. What’s the problem?
Why are containers insecure?
• They weren’t designed with full isolation like VMs
• Not everything in Linux is namespaced
• What do they do to the network?
9. COEs help container orchestration!
…but what about networking?
• Scaling issues for ad-hoc security implementation with security/policy complexity
• Which networking model to choose? CNM? CNI?
• Why is network security always seemingly considered last?
10. Who’s going to care?
{ Your Network Security team!
And you should too.
11. Containers add network complexity!!!
• More components = more endpoints
• Network scaling issues
• Security/policy complexity
12. Perimeter Security approach is not enough
• Legacy architectures tended to put higher-layer services like security and FWs at the core
• Perimeter protection is useful for north-south flows, but what about east-west?
• More = better? How to manage more pinch points?
13. #ThrowbackThursday
What did OpenStack do?
• Started in 2010 as an open source community for cloud compute
• Gained a huge following and became production ready
• Enabled collaboration amongst engineers for technology advancement
14. #ThrowbackThursday
Neutron came late in the game!
• Took 3 years before a dedicated networking project formed
• Neutron enabled third-party plugin solutions
• Formed an advanced networking framework via the community
15. What is Neutron?
• Production-grade open framework for networking:
Multi-tenancy
Scalable, fault-tolerant devices (or device-agnostic network services)
L2 isolation
L3 routing isolation
  • VPC
  • Like VRF (virtual routing and forwarding)
Scalable gateways
Scalable control plane
  • ARP, DHCP, ICMP
Floating/Elastic IPs
Decoupled from the physical network
Stateful NAT
  • Port masquerading
  • DNAT
ACLs
Stateful (L4) firewalls
  • Security Groups
Load balancing with health checks
Single pane of glass (API, CLI, GUI)
Integration with COEs & management platforms
  • Docker Swarm, K8S
  • OpenStack, CloudStack
  • vSphere, RHEV, System Center
19. What is Kuryr?
Kuryr has become a collection of projects and repositories:
- kuryr-lib: common libraries (neutron-client, keystone-client)
- kuryr-libnetwork: Docker networking plugin
- kuryr-kubernetes: K8S API watcher and CNI driver
- fuxi: Docker Cinder driver
20. Project Kuryr Contributions
As of Oct. 18th, 2016: http://stackalytics.com/?release=all&module=kuryr-group&metric=commits
21. Some previous* networking options with Docker
STOP
IPtables maybe?
Done with Neutron? Tell me more, please!
• libnetwork:
• Null (with nothing in its networking namespace)
• Bridge
• Overlay
• Remote
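For orientation, here is a minimal sketch using the Docker SDK for Python that creates networks with the driver types listed above; the network names and the "kuryr" remote-driver name are illustrative assumptions, and the overlay and remote drivers only work if the corresponding backend or plugin is available.

    import docker

    client = docker.from_env()

    # Built-in single-host bridge network (the default Docker behaviour).
    bridge_net = client.networks.create("demo-bridge", driver="bridge")

    # Multi-host overlay network (requires a key-value store / swarm mode).
    # overlay_net = client.networks.create("demo-overlay", driver="overlay")

    # Remote driver: Docker delegates all network operations to an external
    # plugin -- e.g. "kuryr" if the Kuryr libnetwork plugin is installed.
    # remote_net = client.networks.create("demo-kuryr", driver="kuryr")

    print(bridge_net.name, bridge_net.attrs["Driver"])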
22. Kuryr: Docker (1.9+)’s remote driver for Neutron networking
Kuryr implements a libnetwork remote network driver and maps its calls to OpenStack Neutron. It translates between libnetwork's Container Network Model (CNM) and Neutron's networking model.
Kuryr also acts as a libnetwork IPAM driver.
24. Kuryr translation please!
• Docker uses a PUSH model to call a service for libnetwork
• Kuryr maps the 3 main CNM components to Neutron networking constructs
• Ability to attach to existing Neutron networks with host isolation (container cannot see host network)
libnetwork → Neutron
Network → Network
Sandbox → Subnet, Ports, netns
Endpoint → Port
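As a rough illustration of that mapping (not Kuryr's actual code), a remote driver handling a libnetwork "create network" request would issue Neutron API calls roughly like the following sketch with python-neutronclient; the credentials, names, and CIDR are illustrative assumptions.

    from neutronclient.v2_0 import client as neutron_client

    # Illustrative credentials -- in a real deployment these come from Keystone config.
    neutron = neutron_client.Client(username="demo", password="secret",
                                    tenant_name="demo",
                                    auth_url="http://controller:5000/v2.0")

    # libnetwork Network  -> Neutron network
    net = neutron.create_network({"network": {"name": "container-net"}})
    net_id = net["network"]["id"]

    # libnetwork Sandbox  -> Neutron subnet (plus ports and a netns on the host)
    subnet = neutron.create_subnet({"subnet": {"network_id": net_id,
                                               "ip_version": 4,
                                               "cidr": "10.10.0.0/24"}})

    # libnetwork Endpoint -> Neutron port
    port = neutron.create_port({"port": {"network_id": net_id}})
    print(port["port"]["id"])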
25. Networking services from Neutron, for containers!
Distributed Layer 2 Switching
Distributed Layer 3 Gateways
Floating IPs
Service Insertion
Layer 4 Distributed Stateful NAT
Distributed Firewall
VTEP Gateways
Distributed DHCP
Layer 4 Load Balancer-as-a-Service (with Health Checks)
Policy without the need for iptables
Distributed Metadata
TAP-as-a-Service
27. Kuryr delivers for CNM, but what about CNI?
{ It’s an enabler for existing, well-defined networking plugins for containers
28. Kubernetes Presence in Container Orchestration
• Open sourced from production-grade, scalable technology used by Borg & Omega at Google for over 10 years
• Explosive use over the last 12 months, including users like eBay and Lithium Technologies
• Portable, extensible, self-healing
• Impressive automated rollouts & rollbacks with one command
• Growing ecosystem supporting Kubernetes:
• CoreOS, RH OpenShift, Platform9, Weaveworks, Midokura!
30. Kubernetes Control Plane
• etcd
• All persistent master state is stored in an instance of etcd
• To date, runs as a single instance; HA clusters in future
• Provides a “great” way to store configuration data reliably
• With watch support, coordinating components can be notified very quickly of changes
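As a small illustration of the watch mechanism described above, the sketch below uses the third-party python-etcd3 client (an assumption for the example; Kubernetes components normally reach etcd through the API server) to react to key changes; the endpoint and key are made up.

    import etcd3

    # Connect to an etcd member (endpoint is illustrative).
    etcd = etcd3.client(host="127.0.0.1", port=2379)

    # Store a piece of configuration.
    etcd.put("/demo/config/replicas", "3")

    # Watch the key: the iterator yields an event for every change,
    # which is how coordinating components learn about updates quickly.
    events, cancel = etcd.watch("/demo/config/replicas")
    for event in events:
        print("changed:", event.key, event.value)
        cancel()   # stop after the first notification for this demo
        break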
31. Kubernetes Control Plane Continued
• K8S API Server
• Serves up the Kubernetes API
• Intended to be a CRUD-y server, with separate components or plug-ins for logic implementation
• Processes REST operations, validates them, and updates the corresponding objects in etcd
• Scheduler
• Binds unscheduled pods to nodes
• Pluggable, for multiple cluster schedulers and even user-provided schedulers in the future
• K8S Controller Manager Server
• All other cluster-level functions are currently performed by the Controller Manager
• E.g. Endpoints objects are created and updated by the endpoints controller; nodes are discovered, managed, and monitored by the node controller
• The replication controller is a mechanism layered on top of the simple pod API
• Planned to be a pluggable mechanism
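To make the API server's role concrete, here is a minimal sketch using the official Kubernetes Python client (an assumption for illustration; any REST client works) to perform a read that the API server serves from state kept in etcd. It assumes a reachable cluster and a local kubeconfig.

    from kubernetes import client, config

    # Load credentials/endpoint from ~/.kube/config (assumes a working cluster).
    config.load_kube_config()

    v1 = client.CoreV1Api()

    # A simple REST read handled by the API server from objects stored in etcd.
    pods = v1.list_pod_for_all_namespaces()
    for pod in pods.items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)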
32. Kubernetes Worker Node
• kubelet
• Manages pods and their containers, their images, their volumes, etc.
• kube-proxy
• Run on each node to provide a simple network proxy and load balancer
• Reflects services as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends
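The sketch below shows, in very reduced form, the round-robin behaviour just described for kube-proxy: a listener accepts connections for a service address and hands each one to the next backend pod in turn. The addresses and ports are invented for illustration, the byte-copying loop is omitted, and real kube-proxy achieves this with iptables/userspace proxying rather than application code.

    import itertools
    import socket

    # Hypothetical backend pod endpoints for one Service (illustrative addresses).
    backends = itertools.cycle([("10.244.1.5", 8080), ("10.244.2.7", 8080)])

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 9376))   # the Service's node-local port
    listener.listen(5)

    while True:
        client_conn, _ = listener.accept()
        backend = next(backends)                 # round-robin choice per connection
        upstream = socket.create_connection(backend)
        # A real proxy now copies bytes in both directions until either side closes;
        # omitted here to keep the sketch short.
        upstream.close()
        client_conn.close()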
33. Kubernetes Networking Model
There are 4 distinct networking problems to solve:
1. Highly-coupled container-to-container communications
2. Pod-to-Pod communications
3. Pod-to-Service communications
4. External-to-internal communications
34. Kubernetes Networking Options
Flannel provides an overlay to enable cross-host communication:
- IP per pod
- VXLAN tunneling between hosts
- iptables for NAT
- Multi-tenancy?
  - Host per tenant?
  - Cluster per tenant?
  - How to share VMs and containers on the same network for the same tenant?
- Security risk on the docker bridge? Shared networking stack
37. Security at the edge
1. vPort1 initiates a packet flow through the virtual network
2. MN Agent fetches the virtual topology/state
3. MN simulates the packet through the virtual network
4. MN installs a flow in the kernel at the ingress host
5. Packet is sent in tunnel to egress host
38. Kubernetes Integration: How with Kuryr?
Kubernetes 1.2+
Two integration components:
CNI driver
• Standard container networking: preferred K8S network extension point
• Can serve rkt, appc, docker
• Uses Kuryr port binding library to bind local pod using metadata
Raven (Part of Kuryr project)
• Python 3
• AsyncIO
• Extensible API watcher
• Drives the K8S API to Neutron API translation
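Raven's watcher pattern can be pictured with the minimal asyncio sketch below: it opens a streaming watch on the Kubernetes API and hands each event to a translation function that would issue the corresponding Neutron call. This is not Raven's actual code; the aiohttp dependency, the API endpoint, and the handler are illustrative assumptions.

    import asyncio
    import json
    import aiohttp

    K8S_WATCH_URL = "http://127.0.0.1:8080/api/v1/namespaces?watch=true"  # illustrative endpoint

    async def translate(event):
        # Placeholder for the K8S -> Neutron translation (e.g. Namespace ADDED -> create network).
        print(event["type"], event["object"]["metadata"]["name"])

    async def watch_namespaces():
        async with aiohttp.ClientSession() as session:
            async with session.get(K8S_WATCH_URL) as resp:
                # The watch endpoint streams one JSON event per line.
                async for line in resp.content:
                    if line.strip():
                        await translate(json.loads(line))

    asyncio.get_event_loop().run_until_complete(watch_namespaces())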
39. Kubernetes Integration: How with Kuryr+MidoNet?
Defaults:
kube-proxy: generates iptables rules which map portal_ips such that the traffic gets to the local kube-proxy daemon. Does the equivalent of a NAT to the actual pod address
flannel: default networking integration in CoreOS
Enhanced by:
Kuryr CNI driver: enables the host binding
Raven: process used to proxy K8S API to Neutron API
MidoNet agent: provides higher layer services to the pods
40. Kubernetes Integration: How with Kuryr?
Raven: used to proxy K8S API to Neutron API + IPAM
- focuses only on building the virtual network topology translated from the events of the internal state changes of K8S through its API server
Kuryr CNI driver: takes care of binding virtual ports to physical interfaces on worker nodes for deployed pods
Kubernetes API → Neutron API
Namespace → Network
Cluster Subnet → Subnet
Pod → Port
Service → LBaaS Pool, LBaaS VIP (FIP)
Endpoint → LBaaS Pool Member
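In code terms, the table above amounts to a per-resource dispatch, which Raven-like translation logic could express roughly as below; the handler names are invented placeholders standing in for the Neutron calls sketched earlier, not Kuryr/Raven internals.

    # Rough sketch of the K8S -> Neutron resource dispatch implied by the table above.
    def create_network(obj): ...            # Namespace  -> Network
    def create_port(obj): ...               # Pod        -> Port
    def create_lbaas_pool_and_vip(obj): ... # Service    -> LBaaS Pool + VIP (FIP)
    def create_lbaas_member(obj): ...       # Endpoint   -> LBaaS Pool Member

    K8S_TO_NEUTRON = {
        "Namespace": create_network,
        "Pod": create_port,
        "Service": create_lbaas_pool_and_vip,
        "Endpoints": create_lbaas_member,
    }

    def handle_event(event):
        kind = event["object"]["kind"]
        handler = K8S_TO_NEUTRON.get(kind)
        if handler:
            handler(event["object"])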
41. Kubernetes Integration: How with Kuryr+MidoNet?
Raven: used to proxy K8S API to Neutron API
Kuryr CNI driver: takes care of binding virtual ports to physical interfaces on worker nodes for deployed pods
43. Kubernetes Integration: Where are we now with MidoNet?
Completed integration components:
- CNI driver
- Raven
- Namespace implementation (a mechanism to partition resources created by users into a logically named group):
  - each namespace gets its own router
  - all pods driven by the RC should be on the same logical network
- CoreOS support
- Containerized MidoNet services
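A rough sketch of the "router per namespace" idea, reusing the same illustrative python-neutronclient setup as earlier (names and credentials are assumptions, not the actual Kuryr implementation): when a namespace appears, create a dedicated router and attach the namespace's subnet to it.

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username="demo", password="secret",
                                    tenant_name="demo",
                                    auth_url="http://controller:5000/v2.0")

    def on_namespace_created(namespace_name, subnet_id):
        # One dedicated router per Kubernetes namespace.
        router = neutron.create_router(
            {"router": {"name": "ns-%s-router" % namespace_name}})
        # Attach the namespace's subnet so all its pods share one logical network.
        neutron.add_interface_router(router["router"]["id"],
                                     {"subnet_id": subnet_id})
        return router["router"]["id"]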
44. Where will Kuryr go next?
• Bring container and VM networking under one API
• Multi-tenancy
• Advanced networking services/map Network Policies
• QoS
• Adapt implementation to work with other COEs
• kuryr-mesos
• kuryr-cloudfoundry
• kuryr-openshift
• Magnum Support (containers in VMs) in OpenStack
Purpose
Examples of existing ones
What are COE networking models?
Docker: CNM
K8S & Mesos: CNI
Maturity?
Re-inventing the wheel, including the political battles, but that’s the fun that open source brings
- Otto’s Magnum webinar compares COEs (minute 16:30??): http://blog.midokura.com/2016/05/project-magnum-introduction/
Talk about which are good for what
If 10K nodes, use …
Reference: https://github.com/kubernetes/kubernetes/blob/master/docs/design/architecture.md
Service endpoints are currently found via DNS or through environment variables (both Docker-links-compatible and Kubernetes {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported). These variables resolve to ports managed by the service proxy.
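For instance, a pod discovering a service named "foo" through the environment-variable mechanism mentioned above could do something like this minimal sketch (the service name is an example):

    import os

    # Kubernetes injects {SERVICE}_SERVICE_HOST / {SERVICE}_SERVICE_PORT into each pod's
    # environment; these resolve to an address handled by the service proxy, not a pod IP.
    host = os.environ.get("FOO_SERVICE_HOST")
    port = os.environ.get("FOO_SERVICE_PORT")
    if host and port:
        print("service 'foo' reachable at %s:%s (via the service proxy)" % (host, port))
    else:
        print("service 'foo' not defined in this pod's environment")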
The kubelet ships with built-in support for cAdvisor, which collects, aggregates, processes and exports information about running containers on a given system. cAdvisor includes a built-in web interface available on port 4194
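And, assuming cAdvisor's web/REST interface is reachable on that port, container information could be pulled with something like the following sketch (the node address and API version path are assumptions):

    import json
    import urllib.request

    # cAdvisor's REST API on a node (address and API version are illustrative).
    url = "http://node-1.example.com:4194/api/v1.3/containers/"

    with urllib.request.urlopen(url) as resp:
        info = json.loads(resp.read().decode("utf-8"))

    # The response describes the container hierarchy and recent resource usage samples.
    print(sorted(info.keys()))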