<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Driver memory utilization grows continuously during job in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/driver-memory-utilization-grows-continuously-during-job/m-p/154352#M54086</link>
    <description>&lt;P&gt;I have a batch job that runs thousands of Deep Clone commands; it uses a ForEach task to run multiple Deep Clones in parallel. It was taking a very long time, and I realized the driver was the main culprit: it was using up all of its memory a few minutes into the job. I increased the driver size and switched to a node type with much more memory. That improved performance significantly, but the driver would still inevitably run out of memory and hit the same bottleneck, even with 128GB of RAM.&lt;/P&gt;&lt;P&gt;You can see the incremental increase in memory utilization as the job progresses here:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="tsam_2-1776095245905.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/25946i23158E7ADE3DBD93/image-size/medium?v=v2&amp;amp;px=400" role="button" title="tsam_2-1776095245905.png" alt="tsam_2-1776095245905.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;By the end of the job, the driver is using over 122GB of RAM, which seems excessive when all it's doing is running SQL Deep Clone commands without collecting any data.&lt;/P&gt;&lt;P&gt;What could cause so much bloat in this situation? And is there a way to avoid it from the start, or catch and remedy it during the job?&lt;/P&gt;</description>
    <pubDate>Mon, 13 Apr 2026 15:58:32 GMT</pubDate>
    <dc:creator>tsam</dc:creator>
    <dc:date>2026-04-13T15:58:32Z</dc:date>
    <item>
      <title>Driver memory utilization grows continuously during job</title>
      <link>https://community.databricks.com/t5/data-engineering/driver-memory-utilization-grows-continuously-during-job/m-p/154352#M54086</link>
      <description>&lt;P&gt;I have a batch job that runs thousands of Deep Clone commands; it uses a ForEach task to run multiple Deep Clones in parallel. It was taking a very long time, and I realized the driver was the main culprit: it was using up all of its memory a few minutes into the job. I increased the driver size and switched to a node type with much more memory. That improved performance significantly, but the driver would still inevitably run out of memory and hit the same bottleneck, even with 128GB of RAM.&lt;/P&gt;&lt;P&gt;You can see the incremental increase in memory utilization as the job progresses here:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="tsam_2-1776095245905.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/25946i23158E7ADE3DBD93/image-size/medium?v=v2&amp;amp;px=400" role="button" title="tsam_2-1776095245905.png" alt="tsam_2-1776095245905.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;By the end of the job, the driver is using over 122GB of RAM, which seems excessive when all it's doing is running SQL Deep Clone commands without collecting any data.&lt;/P&gt;&lt;P&gt;What could cause so much bloat in this situation? And is there a way to avoid it from the start, or catch and remedy it during the job?&lt;/P&gt;</description>
      <pubDate>Mon, 13 Apr 2026 15:58:32 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/driver-memory-utilization-grows-continuously-during-job/m-p/154352#M54086</guid>
      <dc:creator>tsam</dc:creator>
      <dc:date>2026-04-13T15:58:32Z</dc:date>
    </item>
    <item>
      <title>Re: Driver memory utilization grows continuously during job</title>
      <link>https://community.databricks.com/t5/data-engineering/driver-memory-utilization-grows-continuously-during-job/m-p/154375#M54091</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/226856"&gt;@tsam&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;I think your problem is caused by the fact that each "CREATE OR REPLACE TABLE ... DEEP CLONE" call accumulates state on the driver, even though you're not collecting data.&lt;/P&gt;&lt;P&gt;The main culprits are:&lt;/P&gt;&lt;P&gt;1. &lt;STRONG&gt;Spark Plan / Query Plan Caching.&lt;/STRONG&gt; Every SQL command generates a logical and physical plan that Spark caches in memory. With thousands of Deep Clone commands, these plans pile up and never get garbage collected during the job. Deep Clone plans are particularly heavy because they contain full table metadata, file listings, and schema information for both source and target.&lt;/P&gt;&lt;P&gt;2. &lt;STRONG&gt;Spark Listener Event Queue.&lt;/STRONG&gt; The Spark UI event log and listeners accumulate SparkListenerEvent objects for every completed query - stage info, task metrics, SQL execution details. Thousands of clones mean thousands of events sitting in the driver's heap.&lt;/P&gt;&lt;P&gt;3. &lt;STRONG&gt;Delta Log State.&lt;/STRONG&gt; Each Deep Clone reads the Delta transaction log of the source table. The driver holds onto DeltaLog snapshot objects, and Delta's internal log cache can grow very large across thousands of distinct tables.&lt;/P&gt;&lt;P&gt;To mitigate this, &lt;STRONG&gt;batch the clones and clear accumulated driver state between batches&lt;/STRONG&gt;. This should be quite an effective approach: chunk your clone list into batches (say 50–100 tables) and, between batches, clear what state you can:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;from pyspark.sql import SparkSession

def run_deep_clones(table_list, batch_size=50):
    # On Databricks, the active session is the notebook's `spark` object
    spark = SparkSession.getActiveSession()

    for i in range(0, len(table_list), batch_size):
        batch = table_list[i : i + batch_size]

        for table in batch:
            spark.sql(f"CREATE OR REPLACE TABLE {table['target']} DEEP CLONE {table['source']}")

        # Force cleanup between batches
        spark.catalog.clearCache()  # Drop cached tables and plans from the catalog cache
        spark._jvm.System.gc()  # Suggest a JVM GC on the driver (a hint only, not guaranteed)

        print(f"Completed batch {i // batch_size + 1}, "
              f"{min(i + batch_size, len(table_list))}/{len(table_list)} tables done")&lt;/LI-CODE&gt;</description>
      <pubDate>Mon, 13 Apr 2026 19:16:36 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/driver-memory-utilization-grows-continuously-during-job/m-p/154375#M54091</guid>
      <dc:creator>szymon_dybczak</dc:creator>
      <dc:date>2026-04-13T19:16:36Z</dc:date>
    </item>
  </channel>
</rss>

