Talk 1. Scaling Apache Spark on Kubernetes at Lyft
As part of this mission, Lyft invests heavily in open source infrastructure and tooling. At Lyft, Kubernetes has emerged as the next-generation cloud native infrastructure supporting a wide variety of distributed workloads, and Apache Spark has evolved to serve both machine learning and large-scale ETL workloads. By combining the flexibility of Kubernetes with the data processing power of Apache Spark, Lyft is able to take ETL data processing to a new level. In this talk, we cover the challenges the Lyft team faced and the solutions they developed to support Apache Spark on Kubernetes in production and at scale. Topics include:
- Key traits of Apache Spark on Kubernetes.
- A deep dive into Lyft's multi-cluster setup and operations for handling petabytes of production data.
- How Lyft extends and enhances Apache Spark to support capabilities such as Spark pod lifecycle metrics and state management, resource prioritization, and queuing and throttling.
- Dynamic job scale estimation and runtime dynamic job configuration.
- How Lyft serves internal data scientists, business analysts, and data engineers via a multi-cluster setup.
Speaker: Li Gao
Li Gao is the tech lead of the cloud native Spark compute initiative at Lyft. Prior to Lyft, Li held various technical leadership positions at Salesforce, Fitbit, Marin Software, and several startups, working on cloud native and hybrid cloud data platforms at scale. Besides Spark, Li has scaled and productionized other open source projects, such as Presto, Apache HBase, Apache Phoenix, Apache Kafka, Apache Airflow, Apache Hive, and Apache Cassandra.
2. Introduction
#UnifiedAnalytics #SparkAISummit
Li Gao
Works in the Data Platform team at Lyft, currently leading
multiple Data Infra initiatives within Data Platform, including
the Spark on Kubernetes project.
Previously held tech leadership roles at Salesforce, Fitbit,
Groupon, and other startups.
4. Data Landscape
● Batch data Ingestion and ETL
● Data Streaming
● ML platforms
● Notebooks and BI tools
● Query and Visualization
● Operational Analytics
● Data Discovery & Lineage
● ML workflow orchestration
● Cloud Platforms
9. Batch Compute Challenges
● 3rd-party vendor dependency limitations
● Data ETL expressed solely in SQL, sometimes as hard-to-maintain complex SQL
● Complex logic expressed in Python that is hard to express in SQL
● Different dependencies and versions across jobs
● Resource load balancing for heterogeneous workloads
11. How can Spark help?
[Diagram: Spark unifies applications and APIs, runtime environments, and data sources and sinks such as RDB/KV stores]
12. What challenges remain?
● Per-job custom dependencies and security context isolation
● Multi-version runtime requirements (Py3 vs. Py2, Spark versions)
● Still need to run on shared clusters for cost efficiency
● Mixed ML and ETL workloads
13. How can Kubernetes help?
● Event-driven & declarative APIs
● Immutability of pods
● Operators & controllers for extensibility
● Building blocks: pods, ingress, services, namespaces
● Multi-tenancy support
● Service mesh and image registry integrations
● Vibrant CNCF community
15. What challenges still remain?
● Spark on K8s is still in its early days
● Single-cluster scaling limits
● CRD operator choking limits
● Cluster control plane rollout pain points
● Pod churn and IP allocation throttling
● Default K8s scheduler limitations
● ECR container registry reliability
16. Current scale
● 10s of PB in the data lake
● O(100k) batch jobs running daily
● Thousands of EC2 nodes spanning multiple clusters and AZs
● Thousands of workflows running daily
17. How Lyft scales Spark on K8s
Scaling dimensions and constraints:
● # of clusters
● # of namespaces
● # of nodes
● # of pods
● Pod churn rate
● Pod size
● Job:pod ratio
● IP allocation rate limit
● ECR rate limit
● Affinity & isolation
● QoS & quota
● Pod scheduler
21. HA in Cluster Pool
[Diagram: Cluster Pool A rotating among Cluster 1, Cluster 2, Cluster 3, and Cluster 4]
● Cluster rotation within a cluster pool
● Automated provisioning of a new cluster, which is then (manually) added into rotation
● Throttle at the lower bound while rotation is in progress
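The rotation and throttling behavior above can be sketched in a few lines of Python. This is an illustrative model, not Lyft's actual implementation; the class names, capacity numbers, and round-robin policy are assumptions.

```python
# Illustrative sketch of cluster-pool rotation with lower-bound throttling.
# All names and capacity numbers are hypothetical.

class Cluster:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity      # max concurrent jobs this cluster accepts
        self.in_rotation = True

class ClusterPool:
    def __init__(self, clusters):
        self.clusters = clusters
        self._next = 0

    def rotate_out(self, name):
        """Take a cluster out of rotation, e.g. for a control-plane upgrade."""
        for c in self.clusters:
            if c.name == name:
                c.in_rotation = False

    def rotate_in(self, cluster):
        """Manually add a freshly provisioned cluster into rotation."""
        self.clusters.append(cluster)

    def effective_capacity(self):
        """While rotation is in progress, only in-rotation clusters count
        toward admission -- the pool throttles at the lower bound."""
        return sum(c.capacity for c in self.clusters if c.in_rotation)

    def pick(self):
        """Round-robin over in-rotation clusters for new job submissions."""
        active = [c for c in self.clusters if c.in_rotation]
        if not active:
            raise RuntimeError("no cluster in rotation")
        c = active[self._next % len(active)]
        self._next += 1
        return c

pool = ClusterPool([Cluster("cluster-1", 100),
                    Cluster("cluster-2", 100),
                    Cluster("cluster-3", 100)])
pool.rotate_out("cluster-2")           # rotation begins
throttled = pool.effective_capacity()  # pool admits fewer jobs while rotating
pool.rotate_in(Cluster("cluster-4", 100))
restored = pool.effective_capacity()   # capacity restored once the new cluster joins
```

The key property is that admission capacity shrinks while a cluster is out of rotation and recovers only when the replacement joins the pool.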
22. Multiple Namespaces (Groups)
[Diagram: pods grouped into Namespaces 1-3 across Nodes A-D, with per-namespace IAM roles (Role1, Role2) and max pod sizes (Max Pod Size 1, Max Pod Size 2)]
● A practical limit of ~3K active pods per namespace observed
● Less preemption required when namespaces are isolated by quota
● Different namespaces can map to different IAM roles and sidecar
configurations for security and auditing purposes
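A minimal sketch of the per-namespace isolation idea: each namespace (group) carries its own quota and IAM role, so admission decisions and credentials stay inside the group. The namespace names, role ARNs, and quota numbers below are illustrative assumptions, not Lyft's actual configuration.

```python
# Hypothetical per-namespace config: quota caps and IAM role mapping.
# The ~3K pod figure mirrors the practical limit mentioned on the slide.

NAMESPACES = {
    "etl": {"iam_role": "arn:aws:iam::111111111111:role/etl-role",  # hypothetical ARN
            "max_pods": 3000},
    "ml":  {"iam_role": "arn:aws:iam::111111111111:role/ml-role",   # hypothetical ARN
            "max_pods": 3000},
}

def admit(namespace, active_pods, requested_pods):
    """Admit a pod request only if it fits within the namespace quota,
    so one group's burst cannot preempt another group's pods."""
    quota = NAMESPACES[namespace]["max_pods"]
    return active_pods + requested_pods <= quota

ok = admit("etl", active_pods=2900, requested_pods=50)       # fits under quota
rejected = admit("ml", active_pods=2990, requested_pods=20)  # would exceed quota
```

Because each group is capped independently, a spike in one namespace is rejected at admission time instead of triggering cross-tenant preemption.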
23. Pod Sharing
[Diagram: a job controller routes Jobs 1-4 either to shared Spark driver/executor pods or to dedicated, isolated pods carrying per-job dependencies, all reading from and writing to AWS S3]
26. Pod Priority and Preemption (WIP)
● Priority-based preemption
● Driver pods have higher priority than executor pods
● Experimental
[Diagram: before preemption the K8s scheduler runs pods D1, D2, E1-E4; a new pod request arrives and executor E1 is evicted to make room for E5]
https://github.com/kubernetes/kubernetes/issues/71486
https://github.com/kubernetes/enhancements/issues/564
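The preemption rule on this slide can be modeled as a toy scheduler: drivers carry a higher priority value than executors, and when the cluster is full an incoming higher-priority pod evicts the lowest-priority running pod. The priority values and scheduling loop below are a simplified sketch, not the real Kubernetes scheduler.

```python
# Toy model of priority-based preemption: driver > executor.
# Priority values are illustrative stand-ins for PriorityClass values.

DRIVER_PRIORITY = 1000
EXECUTOR_PRIORITY = 100

def schedule(running, new_pod, capacity):
    """Try to place new_pod; return (running, evicted_pod_or_None).

    Pods are (name, priority) tuples."""
    if len(running) < capacity:
        return running + [new_pod], None
    # Cluster full: preempt the lowest-priority pod if the new pod outranks it.
    victim = min(running, key=lambda p: p[1])
    if new_pod[1] > victim[1]:
        survivors = [p for p in running if p is not victim]
        return survivors + [new_pod], victim
    return running, None  # new pod stays pending

running = [("D1", DRIVER_PRIORITY),
           ("E1", EXECUTOR_PRIORITY),
           ("E2", EXECUTOR_PRIORITY)]
# A new driver arrives on a full 3-pod cluster: an executor is evicted.
running, evicted = schedule(running, ("D2", DRIVER_PRIORITY), capacity=3)
```

Keeping drivers above executors matters because evicting a driver kills the whole Spark job, while an evicted executor can be rescheduled.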
27. Taints and Tolerations (WIP)
[Diagram: controllers and watchers run on tainted core nodes (Nodes A-B); Job 1 and Job 2 pods (P1-P10) run on tainted worker nodes (Nodes C-F)]
● Other considerations: node labels and node selectors to separate GPU-based and CPU-based workloads
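The taint/toleration separation above can be sketched as a simple matching rule: a pod may land on a node only if it tolerates every taint on that node, which keeps Spark job pods off the core nodes reserved for controllers. The taint keys and values below are illustrative, not Lyft's actual labels.

```python
# Sketch of taint/toleration matching. Taints are (key, value, effect)
# tuples; the key/value strings here are hypothetical.

def tolerates(pod_tolerations, node_taints):
    """A pod can be scheduled on a node only if it tolerates every taint."""
    return all(t in pod_tolerations for t in node_taints)

CORE_TAINT = ("dedicated", "core", "NoSchedule")
WORKER_TAINT = ("dedicated", "worker", "NoSchedule")

controller_pod = {CORE_TAINT}    # controllers/watchers tolerate core nodes
spark_exec_pod = {WORKER_TAINT}  # Spark executors tolerate worker nodes only

on_core = tolerates(controller_pod, [CORE_TAINT])   # controllers fit on core nodes
blocked = tolerates(spark_exec_pod, [CORE_TAINT])   # executors are kept off core nodes
```

Node labels and selectors (e.g. for GPU vs. CPU pools) would add an attraction rule on top of this repulsion rule.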
28. Mutating Admission Hooks
[Diagram: a pod request flows through the K8s API — HTTP handler, authn & authz, mutating admission controllers/webhooks, schema validation, validating admission controllers/webhooks — into etcd; the K8s pod scheduler then assigns the Spark pod to a node, where the kubelet starts it with injected sidecars and config]
credit: https://banzaicloud.com/blog/k8s-admission-webhooks/
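At its core, a mutating admission webhook receives an AdmissionReview request and returns a base64-encoded JSONPatch that rewrites the pod spec, which is how sidecars and config get injected into Spark pods. The handler below is a self-contained sketch; the sidecar name, image, and label convention are hypothetical, not Lyft's actual setup.

```python
import base64
import json

# Minimal sketch of a mutating admission webhook handler for Spark pods.
# It inspects the incoming pod and emits a JSONPatch injecting a sidecar.
# "logging-sidecar" and its image are hypothetical examples.

def mutate(admission_review):
    pod = admission_review["request"]["object"]
    patch = []
    # Only mutate pods that look like Spark pods (hypothetical label check).
    if pod["metadata"].get("labels", {}).get("spark-role"):
        patch.append({
            "op": "add",
            "path": "/spec/containers/-",
            "value": {"name": "logging-sidecar",
                      "image": "example/log-shipper:latest"},
        })
    # AdmissionReview responses carry the patch base64-encoded.
    encoded = base64.b64encode(json.dumps(patch).encode()).decode()
    return {"response": {"uid": admission_review["request"]["uid"],
                         "allowed": True,
                         "patchType": "JSONPatch",
                         "patch": encoded}}

review = {"request": {"uid": "123",
                      "object": {"metadata": {"labels": {"spark-role": "driver"}},
                                 "spec": {"containers": []}}}}
resp = mutate(review)
decoded_patch = json.loads(base64.b64decode(resp["response"]["patch"]))
```

In production this function would sit behind an HTTPS endpoint registered via a MutatingWebhookConfiguration; the API server applies the returned patch before the object is persisted to etcd.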
29. Custom k8s Pod scheduler for batch (WIP)
[Diagram: the default K8s scheduler applies predicates, then priorities, then round-robin over all active nodes; the dynamic policy-driven scheduler applies predicates, a weight engine, and a placement engine driven by policies over all active nodes]
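The policy-driven flow above — predicates filter infeasible nodes, a weight engine scores the survivors, and a placement engine picks the winner — can be sketched as follows. The predicate, the bin-packing policy, and all names are illustrative assumptions, not the actual scheduler.

```python
# Sketch of a policy-driven placement pipeline for batch pods.
# Predicates filter, weighted policies score, placement picks the max.

def fits_resources(node, pod):
    """Predicate: the node must have enough free CPU for the pod."""
    return node["free_cpu"] >= pod["cpu"]

PREDICATES = [fits_resources]

# Policies are (scoring_fn, weight) pairs. A bin-packing policy prefers
# fuller nodes (less free CPU scores higher), which suits batch workloads
# better than the default spreading behavior.
POLICIES = [
    (lambda n: -n["free_cpu"], 1.0),
]

def score(node, policies):
    """Weight engine: weighted sum of per-policy scores."""
    return sum(weight * fn(node) for fn, weight in policies)

def place(pod, nodes):
    """Placement engine: best-scoring feasible node, or None."""
    feasible = [n for n in nodes if all(p(n, pod) for p in PREDICATES)]
    if not feasible:
        return None
    return max(feasible, key=lambda n: score(n, POLICIES))

nodes = [{"name": "node-a", "free_cpu": 8},
         {"name": "node-b", "free_cpu": 2}]
chosen = place({"cpu": 2}, nodes)  # bin-packing picks the fuller node-b
```

Swapping the policy list (e.g. spreading instead of bin-packing, or GPU affinity) changes placement behavior without touching the predicate or placement stages, which is the point of making the pipeline policy-driven.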
30. What about ECR reliability?
[Diagram: Nodes 1-3 each run pods served by a DaemonSet with Docker-in-Docker that pulls ECR container images, reducing direct dependence on the registry]
31. Spark Job Config Overlays (DML)
Config layers, applied in order:
● Cluster pool defaults
● Cluster defaults
● Spark job user-specified config
● Cluster and namespace overrides
⇒ Final Spark job config, assembled by a Config Composer & Event Watcher and handed to the Spark Operator
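The overlay composition on this slide reduces to a layered merge where later layers win. A minimal sketch, with illustrative Spark settings standing in for real cluster defaults:

```python
# Sketch of layered Spark config composition: later layers override
# earlier ones, with cluster/namespace overrides applied last.
# The specific keys and values are illustrative examples.

def compose(*layers):
    """Merge config layers left to right; later layers win on conflicts."""
    final = {}
    for layer in layers:
        final.update(layer)
    return final

cluster_pool_defaults = {"spark.executor.memory": "4g",
                         "spark.dynamicAllocation.enabled": "true"}
cluster_defaults = {"spark.executor.memory": "8g"}       # overrides the pool default
user_config = {"spark.executor.cores": "4"}              # user adds their own keys
cluster_overrides = {"spark.kubernetes.namespace": "etl"}  # enforced last

final_conf = compose(cluster_pool_defaults, cluster_defaults,
                     user_config, cluster_overrides)
```

Applying overrides last means platform-enforced settings (like the target namespace) cannot be clobbered by user-specified config, while users still inherit sensible defaults for anything they leave unset.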
37. Remaining work
● More intelligent & efficient job routing, scheduling, and parameter composition
● End-to-end serverless, self-serviceable, and user-oriented data compute mesh
● Fine-grained cost attribution
● Improved Docker image distribution
● Spark 3.0 & Kubernetes v1.14+
38. Key Takeaways
● Apache Spark can help unify different batch data compute use cases
● Kubernetes can help solve the dependency and multi-version requirements through its containerized approach
● Spark on Kubernetes can scale significantly by using a multi-cluster compute mesh approach with proper resource isolation and scheduling techniques
● Challenges remain when running Spark on Kubernetes at scale
46. What about Python functions?
“I want to express my processing logic in Python functions with external geo libraries (e.g., GeoMesa) and interact with Hive tables” --- Lyft data engineer
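The request in that quote maps to the PySpark UDF pattern: wrap arbitrary Python logic (below, a haversine distance standing in for a geo-library call) and apply it to Hive-table columns. The Spark registration lines are shown only in comments so the sketch runs standalone; the table and column names there are hypothetical.

```python
import math

# A Python function standing in for external geo-library logic.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# In a Spark session with Hive support, the function would be wrapped as a
# UDF and applied to a Hive table roughly like this (hypothetical names):
#
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import DoubleType
#   dist_udf = udf(haversine_km, DoubleType())
#   spark.table("rides").withColumn(
#       "trip_km", dist_udf("src_lat", "src_lon", "dst_lat", "dst_lon"))

same_point = haversine_km(37.77, -122.42, 37.77, -122.42)
quarter_circle = haversine_km(0.0, 0.0, 0.0, 90.0)
```

Because the UDF body is plain Python, it can import any library baked into the job's container image — which is exactly why per-job dependency isolation on Kubernetes matters for this use case.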
Editor's notes
● Different users and use cases: ML, streaming, realtime, batch, notebooks
● Multiple cloud platforms
● Declarative, predictable & repeatable
● Operators add extensibility
● Multi-tenancy
● Container native
● CNCF is a vibrant community and supports numerous projects
● Patch rollout/updates for the CRD/control plane are still evolving
● Pod churn: etcd, resources, TTLs, IP allocation in EC2, for example