"The common use cases of Spark SQL include ad hoc analysis, logical warehousing, query federation, and ETL processing. Spark SQL also powers the other Spark libraries, including Structured Streaming for stream processing, MLlib for machine learning, and GraphFrames for graph-parallel computation. To speed up your Spark applications, you can tune your queries before deploying them to production systems. Spark query plans and the Spark UI give you insight into the performance of your queries. This talk shows how to read and tune query plans for better performance. It also covers the major related features in recent and upcoming releases of Apache Spark."
2. About Me
• Engineering Manager at Databricks
• Apache Spark Committer and PMC Member
• Previously, IBM Master Inventor
• Spark, Database Replication, Information Integration
• Ph.D. in University of Florida
• Github: gatorsmile
3. Databricks Customers Across Industries
Financial Services, Healthcare & Pharma, Media & Entertainment, Technology, Public Sector, Retail & CPG, Consumer Services, Energy & Industrial IoT, Marketing & AdTech, Data & Analytics Services
4. Databricks Unified Analytics Platform
[Platform diagram: Databricks Workspace (notebooks, dashboards, APIs, jobs, models; end-to-end ML lifecycle) on top of the Databricks Runtime (Databricks Delta, ML frameworks) and the Databricks Cloud Service; reliable & scalable, simple & integrated]
27. Cross-session SQL Cache
• If a query is cached in one session, new queries in all sessions may be affected.
• Check your query plan!
32.
• A SQL query can translate into multiple Spark jobs.
• - For example: broadcast exchange, shuffle exchange, scalar subquery.
• - External data sources, such as Delta Lake.
• - The new adaptive query execution.
• A Spark job corresponds to a DAG: a chain of RDD dependencies organized in a directed acyclic graph.
36. Stages Tab
• How is the time spent?
• Any outlier in task execution?
• Straggler tasks?
• Skew in data size, compute time?
• Too many/few tasks (partitions)?
• Load balanced? Locality?
Task-specific info
44. Typical Spark Performance Issues
The table has thousands of partitions
• Hive metastore overhead
This table can have 100s of thousands to millions of files
• File system overhead - listing takes forever!
New data is not immediately visible
• Need to run the “REFRESH TABLE” command in the SQL engine being used
The above issues can add 10s of minutes to the response time!
45. Delta Lake + Spark
Scalable metadata handling @ Delta Lake
Store metadata in the transaction log instead of the metastore
The table has thousands of partitions
• Zero Hive Metastore overhead
The table can have 100s of thousands to millions of files
• No file listing
New data is not immediately visible
• Delta table state is computed on read
46. How do I use Delta?
format("parquet") -> format("delta")
47. Delta Lake + Spark
• Full ACID transactions
• Schema management
• Data versioning and time travel
• Unified batch/streaming support
• Scalable metadata handling
• Record update and deletion
• Data expectations
Delta Lake: https://delta.io/
For details, refer to the blog
https://tinyurl.com/yxhbe2lg
48. Delta Usage Statistics
More than 1 exabyte (10^18 bytes) processed monthly
[By industry: Manufacturing, Public Sector, Technology, Healthcare and Life Sciences, Financial Services, Media and Entertainment, Retail, CPG, and eCommerce, Other]