Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to read and optimize Physical plans in Spark to optimize for TBs and PBs of data workflows

praveenm00
Databricks Partner

In one of the Amazon interviews I attended, for a Big Data Engineer role, I was asked about this particular skill: reading and understanding physical plans in Spark to optimize MASSIVE data loads. But I thought Spark automatically does all these optimizations on its own via Adaptive Query Execution. Am I missing something? If so, how do I address it? It would be great if you folks have experience with this and could share some good resources.

 

Thank you!

3 REPLIES

suranga
Databricks Partner

Yes! AQE helps, but it does not remove the need to read physical plans. In Databricks, the practical skill is to use the plan to spot data movement, skew, sort-merge joins, and unnecessary shuffles, then decide whether to change partitioning, file layout, or join strategy; AQE and Photon can help automatically, but they still work within the shape of the query you give them.

Say in interviews:
"I use the physical plan to find bottlenecks, then tune the data layout and query shape so Spark can execute efficiently at TB/PB scale." AQE re-optimizes during execution based on runtime stats, but it cannot fully fix bad table design, extreme skew, or a query that reads too much data in the first place.

Short resources
For practical reading, start with Databricks AQE docs, Spark SQL tuning docs, and Databricks docs on Photon and Unity Catalog pipelines; those cover the exact runtime behavior youโ€™d discuss in interviews.

SudhansuPatra
Databricks Partner

To handle PB-scale data, the most common "killer" is Data Skew. This happens when one join key (like a null value or a "Power User" ID) has millions of rows while others have only a few.

Even with Sparkโ€™s optimization, one executor will get buried while the others sit idle. The solution is Salting.

1. The Problem: Standard Join
In a standard join, Spark hashes the join key. If ID: 101 appears 1 billion times, all those rows go to one partition.

2. The Fix: Salting Technique
We manually break the "heavy" key into smaller pieces by adding a random "salt" (a suffix).
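To make the idea concrete, here is a minimal, self-contained Python sketch (plain Python, not PySpark) that simulates hash partitioning. The partition count, salt range, and key value are all illustrative: without salting, every row of the hot key 101 hashes to the same partition; with a random salt suffix, the same rows spread across many partitions.

```python
import hashlib
import random
from collections import Counter

def partition_for(key, num_partitions=8):
    # Deterministic hash partitioning, mimicking how a shuffle
    # routes rows to reducers by hashing the join key.
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % num_partitions

hot_rows = [101] * 1_000  # the skewed join key

# Without salting: every row of key 101 hashes to the same partition.
plain = Counter(partition_for(k) for k in hot_rows)

# With salting: append a random suffix 0..7, so the hot key becomes
# 8 distinct sub-keys that land on different partitions.
random.seed(0)
salted = Counter(partition_for(f"{k}_{random.randrange(8)}") for k in hot_rows)

print(len(plain), len(salted))  # 1 partition without salt, several with it
```

In a real Spark join, the salt column goes on the skewed side and the other side is exploded with the full salt range so the keys still match after salting; AQE's skew-join handling performs a similar split automatically when it triggers.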

Louis_Frolio
Databricks Employee

Greetings @praveenm00 ,  good question, and you're right that AQE handles a lot automatically. But understanding physical plans is still worth the investment, especially at TB/PB scale, because AQE works within constraints. It can't fix a bad query structure, misconfigured settings, or unnecessary shuffles baked into your data model. The plan tells you what Spark actually decided to do, which is where any real tuning starts.

How to read the plan:

Use EXPLAIN in SQL or .explain() on a DataFrame. The variants worth knowing:

  • EXPLAIN EXTENDED -- shows parsed, analyzed, optimized, and physical plans
  • EXPLAIN FORMATTED -- cleaner output, easier to navigate
  • EXPLAIN CODEGEN -- generated code, useful for CPU-level tuning

Read bottom-up. Data flows from the leaf nodes (scans) upward through transformations to the final output.
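As a toy illustration of that bottom-up reading order (the plan string below is a hand-written sample in the usual EXPLAIN style, not output from a live cluster): plans print the root operator first, so the deepest-indented lines at the bottom are the scans where execution actually starts.

```python
import re

# Hand-written sample physical plan, root operator printed first.
sample_plan = """\
*(2) HashAggregate(keys=[dept], functions=[count(1)])
+- Exchange hashpartitioning(dept, 200)
   +- *(1) HashAggregate(keys=[dept], functions=[partial_count(1)])
      +- *(1) Filter (isnotnull(salary) AND (salary > 50000))
         +- FileScan parquet spark_catalog.default.employees"""

# Grab the operator name on each line, then reverse: Spark executes
# leaf-first (scan -> filter -> partial agg -> shuffle -> final agg).
root_first = [re.search(r"[A-Za-z]+", line).group()
              for line in sample_plan.splitlines()]
execution_order = list(reversed(root_first))
print(execution_order)
# -> ['FileScan', 'Filter', 'HashAggregate', 'Exchange', 'HashAggregate']
```

Reading it this way makes the expensive step obvious: the `Exchange` in the middle is the shuffle, and everything below it runs before any data moves across the network.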

What to look for at scale:

  • Exchanges (shuffles) -- the most expensive operations. Every ShuffleExchange node is worth questioning. Can it be avoided with better partitioning or bucketing?
  • Join strategy -- SortMergeJoin is common but expensive. AQE can promote to BroadcastHashJoin at runtime if one side is small enough; you can also raise spark.sql.autoBroadcastJoinThreshold or force the choice with a broadcast() hint.
  • Scan-to-output ratio -- if you're scanning 10B rows and keeping 1M, you want those filters pushed down or your data repartitioned.
  • Partition count -- too few means skew and OOM risk, too many means scheduling overhead. AQE coalesces at runtime, but the starting point still matters.
  • Skew detection -- look for SortMergeJoin on high-cardinality keys and confirm that spark.sql.adaptive.skewJoin.enabled is actually triggering, not just enabled.
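The checks above map to a handful of session settings. A sketch of the relevant configs (the key names are real Spark SQL configs, the values shown are illustrative, and `spark` is assumed to be an existing SparkSession):

```python
# Illustrative config fragment; assumes an existing SparkSession `spark`.
spark.conf.set("spark.sql.adaptive.enabled", "true")                      # AQE on (default in recent Spark)
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")   # merge tiny shuffle partitions at runtime
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")             # split skewed partitions in sort-merge joins
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionFactor", "5")  # "skewed" = larger than 5x the median
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "100MB")           # raise the auto-broadcast cutoff (default 10MB)
```

Enabled is not the same as triggered: confirm in the Spark UI SQL tab that the join node actually shows the skew-join split, rather than assuming the setting fired.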

Resources worth checking:

  • Databricks docs on AQE -- covers what it handles and what it doesn't
  • The Spark UI SQL tab -- the visual DAG is much easier to navigate than raw EXPLAIN output
  • "High Performance Spark" by Holden Karau -- still the best deep dive on this

The short version: AQE is a safety net, not a substitute for understanding what your query is doing. At scale, the difference between a 20-minute job and a 2-hour job often comes down to one bad exchange or a skew case that AQE didn't catch. That's exactly the skill that interviewer was testing for.

Hope this helps. Louis.