3 weeks ago
I have a batch job that runs thousands of Deep Clone commands, it uses a ForEach task to run multiple Deep Clones in parallel. It was taking a very long time and I realized that the Driver was the main culprit since it was using up all of its memory a few minutes into the job. I increased the driver size and used a node type with a lot more memory. That improved performance significantly but the driver would still inevitably run out of memory and hit the same bottleneck, even now that it had 128GB of RAM.
You can see the incremental increase in memory utilization as the job progresses here:
By the end of the job, the Driver is using over 122GB of RAM, which seems excessive when all it's doing is running SQL Deep Clone commands without collecting any data.
What could cause so much bloat in this situation? And is there a way to avoid this from the start, or catch and remedy it during the job?
3 weeks ago
Hi @tsam ,
I think your problem is that each "CREATE OR REPLACE TABLE ... DEEP CLONE" call accumulates state on the driver, even though you're not collecting any data.
The main culprits are:
1. Spark plan / query plan caching. Every SQL command generates a logical and physical plan that Spark caches in memory. With thousands of Deep Clone commands, these plans pile up and are never garbage collected during the job. Deep Clone plans are particularly heavy because they contain full table metadata, file listings, and schema information for both source and target.
2. Spark listener event queue. The Spark UI event log and listeners accumulate SparkListenerEvent objects for every completed query (stage info, task metrics, SQL execution details). Thousands of clones means thousands of events sitting in the driver's heap.
3. Delta log state. Each Deep Clone reads the Delta transaction log of the source table. The driver holds onto DeltaLog snapshot objects, and Delta's internal log cache can grow very large across thousands of distinct tables.
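As a partial mitigation for points 1 and 2, you can also cap how much finished-job and SQL-execution state the driver retains for the Spark UI. A minimal sketch - the values are illustrative, and because spark.ui.* settings are read at startup, on Databricks you would set these in the cluster's Spark config rather than at runtime:

from pyspark.sql import SparkSession

# Illustrative values only - tune for your workload. These bound how many
# completed jobs/stages/SQL executions the driver retains for the UI.
spark = (
    SparkSession.builder
    .config("spark.ui.retainedJobs", "100")
    .config("spark.ui.retainedStages", "100")
    .config("spark.sql.ui.retainedExecutions", "50")
    .getOrCreate()
)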
Beyond that, you can take the following approach: batch the clones and clear accumulated state (or restart the SparkSession) periodically. This should be quite effective - chunk your clone list into batches (say 50-100 tables) and, between batches, clear the state the driver has built up:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def run_deep_clones(table_list, batch_size=50):
    for i in range(0, len(table_list), batch_size):
        batch = table_list[i : i + batch_size]
        for table in batch:
            spark.sql(f"CREATE OR REPLACE TABLE {table['target']} DEEP CLONE {table['source']}")
        # Force cleanup between batches
        spark.catalog.clearCache()  # Drop cached data/metadata for this session
        spark._jvm.System.gc()      # Suggest a JVM garbage collection on the driver
        print(f"Completed batch {i // batch_size + 1}, "
              f"{min(i + batch_size, len(table_list))}/{len(table_list)} tables done")
3 weeks ago
Hi @tsam ,
Can you share a few details about your setup?
In parallel, a simple fix I can suggest is to run the job on the most recent DBR version.
Best regards,
2 weeks ago
Hi @aleksandra_ch,
Here are the details you asked for:
3 weeks ago
What you're seeing (a monotonic, stair-step climb in driver RAM over thousands of DEEP CLONE statements) is a very common pattern: the driver is not "holding data", but it is holding metadata, query artifacts, and per-command state that accumulate faster than the JVM can reclaim them.
Even a “pure SQL DDL/DML” workload can bloat the driver because the driver is the control plane for:
parsing/analysis and query planning
tracking Spark SQL executions
holding session/catalog metadata
tracking job/stage/task events + SQL UI state
caching file indexes / transaction log snapshots
retaining objects due to references (listeners, accumulators, progress reporters)
Reduce concurrency
Deep clones are heavy metadata + file movement operations. Running "too many" in parallel can overload the driver and also hit S3/API listing limits.
Try:
cut ForEach concurrency by 50–80%
aim for a concurrency level where driver memory flattens instead of rising
You’ll often get better overall throughput because you avoid driver GC thrash and retries.
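If you drive the clones from a notebook instead of the ForEach task, one way to experiment is a thread pool whose max_workers you lower until driver memory flattens. A sketch only - run_with_limited_concurrency and clone_one are hypothetical helpers, and the {'source': ..., 'target': ...} dict shape is carried over from the snippet above:

from concurrent.futures import ThreadPoolExecutor
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def clone_one(table):
    # Assumes the same {'source': ..., 'target': ...} shape as above.
    spark.sql(f"CREATE OR REPLACE TABLE {table['target']} DEEP CLONE {table['source']}")

def run_with_limited_concurrency(table_list, max_workers=8):
    # max_workers is the concurrency knob: lower it until driver
    # memory flattens instead of climbing.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(clone_one, table_list))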
Prefer SHALLOW CLONE + async copy when it meets your requirements
If your use case is “replicate table structure quickly” and you don’t immediately need independent data copies, shallow clone is dramatically cheaper, since it copies metadata and references source files rather than copying all data.
Then later:
convert to deep copy only for the subset that truly needs it, or
use other replication patterns.
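A minimal sketch of that pattern (table names are placeholders):

# Cheap: SHALLOW CLONE copies only Delta metadata and references the
# source's data files instead of rewriting them.
spark.sql("CREATE OR REPLACE TABLE dev.sales SHALLOW CLONE prod.sales")

# Later, materialize a fully independent copy only where it's needed.
spark.sql("CREATE OR REPLACE TABLE dr.sales DEEP CLONE prod.sales")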
If doing deep clones for migration, consider alternative approaches
Depending on why you’re cloning (migration, environment refresh, etc.), alternatives can reduce driver overhead:
CTAS/CREATE TABLE AS SELECT (heavier compute but sometimes more stable; see the sketch after this list)
incremental copy strategies (if source is changing)
external replication tools/workflows
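For the CTAS option, a minimal sketch (table names are placeholders):

# CTAS rewrites the data instead of cloning files - heavier compute,
# but sometimes more stable for the driver.
spark.sql("""
    CREATE OR REPLACE TABLE target_db.sales AS
    SELECT * FROM source_db.sales
""")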