<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic How to auto‑terminate DLT-managed clusters after pipeline execution? in Get Started Discussions</title>
    <link>https://community.databricks.com/t5/get-started-discussions/how-to-auto-terminate-dlt-managed-clusters-after-pipeline/m-p/145550#M11351</link>
    <description>&lt;P&gt;We have Databricks Jobs that run a combination of &lt;STRONG&gt;DLT pipelines&lt;/STRONG&gt; and &lt;STRONG&gt;notebook tasks&lt;/STRONG&gt;.&lt;BR /&gt;For the notebook tasks, we use a &lt;STRONG&gt;job cluster&lt;/STRONG&gt;, which we auto‑terminate after execution by setting &lt;STRONG&gt;auto termination: 10 minutes&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;However, &lt;STRONG&gt;DLT-managed clusters&lt;/STRONG&gt; behave differently: after the pipeline completes, the DLT cluster keeps running for &lt;STRONG&gt;up to 60 minutes&lt;/STRONG&gt; before shutting down, which results in unnecessary additional cost.&lt;/P&gt;&lt;P&gt;Is there any way to &lt;STRONG&gt;reduce the idle time&lt;/STRONG&gt; or &lt;STRONG&gt;auto‑terminate DLT-managed clusters&lt;/STRONG&gt; sooner, similar to job clusters? Is there any configuration available to control the DLT cluster shutdown time?&lt;/P&gt;&lt;P&gt;Any guidance or recommended best practices would be appreciated.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="anusha98_0-1769611980756.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/23370i9B8D22E8E7F28D82/image-size/medium?v=v2&amp;amp;px=400" role="button" title="anusha98_0-1769611980756.png" alt="anusha98_0-1769611980756.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 28 Jan 2026 14:54:59 GMT</pubDate>
    <dc:creator>anusha98</dc:creator>
    <dc:date>2026-01-28T14:54:59Z</dc:date>
    <item>
      <title>How to auto‑terminate DLT-managed clusters after pipeline execution?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/how-to-auto-terminate-dlt-managed-clusters-after-pipeline/m-p/145550#M11351</link>
      <description>&lt;P&gt;We have Databricks Jobs that run a combination of &lt;STRONG&gt;DLT pipelines&lt;/STRONG&gt; and &lt;STRONG&gt;notebook tasks&lt;/STRONG&gt;.&lt;BR /&gt;For the notebook tasks, we use a &lt;STRONG&gt;job cluster&lt;/STRONG&gt;, which we auto‑terminate after execution by setting &lt;STRONG&gt;auto termination: 10 minutes&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;However, &lt;STRONG&gt;DLT-managed clusters&lt;/STRONG&gt; behave differently: after the pipeline completes, the DLT cluster keeps running for &lt;STRONG&gt;up to 60 minutes&lt;/STRONG&gt; before shutting down, which results in unnecessary additional cost.&lt;/P&gt;&lt;P&gt;Is there any way to &lt;STRONG&gt;reduce the idle time&lt;/STRONG&gt; or &lt;STRONG&gt;auto‑terminate DLT-managed clusters&lt;/STRONG&gt; sooner, similar to job clusters? Is there any configuration available to control the DLT cluster shutdown time?&lt;/P&gt;&lt;P&gt;Any guidance or recommended best practices would be appreciated.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="anusha98_0-1769611980756.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/23370i9B8D22E8E7F28D82/image-size/medium?v=v2&amp;amp;px=400" role="button" title="anusha98_0-1769611980756.png" alt="anusha98_0-1769611980756.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 28 Jan 2026 14:54:59 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/how-to-auto-terminate-dlt-managed-clusters-after-pipeline/m-p/145550#M11351</guid>
      <dc:creator>anusha98</dc:creator>
      <dc:date>2026-01-28T14:54:59Z</dc:date>
    </item>
    <item>
      <title>Re: How to auto‑terminate DLT-managed clusters after pipeline execution?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/how-to-auto-terminate-dlt-managed-clusters-after-pipeline/m-p/145575#M11352</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/192976"&gt;@anusha98&lt;/a&gt;, make sure you are running the pipeline in Production mode rather than Development mode.&lt;/P&gt;</description>
&lt;P&gt;&lt;A href="https://docs.databricks.com/aws/en/ldp/updates#optimize-execution" target="_blank"&gt;https://docs.databricks.com/aws/en/ldp/updates#optimize-execution&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 28 Jan 2026 16:28:52 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/how-to-auto-terminate-dlt-managed-clusters-after-pipeline/m-p/145575#M11352</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2026-01-28T16:28:52Z</dc:date>
    </item>
    <item>
      <title>Hi @anusha98, The behavior you are seeing, where the clus...</title>
      <link>https://community.databricks.com/t5/get-started-discussions/how-to-auto-terminate-dlt-managed-clusters-after-pipeline/m-p/150307#M11513</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/192976"&gt;@anusha98&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;The behavior you are seeing, where the cluster stays running for a long time after pipeline completion, is almost certainly because your pipeline is running in Development mode. In Development mode, the default cluster shutdown delay is 2 hours. In Production mode, the default is 0 seconds, meaning the cluster terminates immediately after the pipeline update finishes.&lt;/P&gt;
&lt;P&gt;Here is how to address this:&lt;/P&gt;
&lt;P&gt;OPTION 1: SWITCH TO PRODUCTION MODE&lt;/P&gt;
&lt;P&gt;If your pipeline is ready for production workloads (scheduled runs, no active debugging), toggle Development mode off in the pipeline settings UI, or set it in your pipeline JSON configuration:&lt;/P&gt;
&lt;PRE&gt;"development": false&lt;/PRE&gt;
&lt;P&gt;In Production mode, the cluster shuts down with 0 seconds of delay by default, which is what you want for cost optimization.&lt;/P&gt;
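&lt;P&gt;For reference, here is a minimal sketch of where that flag sits in the pipeline settings JSON. The pipeline name and notebook path below are placeholders, not taken from your setup:&lt;/P&gt;
&lt;PRE&gt;{
  "name": "my_pipeline",
  "development": false,
  "libraries": [
    { "notebook": { "path": "/Pipelines/my_pipeline_notebook" } }
  ]
}&lt;/PRE&gt;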
&lt;P&gt;OPTION 2: CONFIGURE THE SHUTDOWN DELAY EXPLICITLY&lt;/P&gt;
&lt;P&gt;If you need to stay in Development mode (for faster iteration, cluster reuse between updates, etc.) but want to reduce the idle time, use the pipelines.clusterShutdown.delay configuration parameter. You can set this in the pipeline configuration section:&lt;/P&gt;
&lt;PRE&gt;{
  "configuration": {
    "pipelines.clusterShutdown.delay": "60s"
  }
}&lt;/PRE&gt;
&lt;P&gt;This tells the pipeline to shut down the cluster 60 seconds after the update completes, instead of the default 2 hours. You can set it to any value that makes sense for your workflow, even "0s" if you want immediate shutdown.&lt;/P&gt;
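&lt;P&gt;If you deploy the pipeline with Databricks Asset Bundles, the same override can be expressed in the bundle YAML. This is a sketch assuming a bundle-managed pipeline resource; the resource and pipeline names are placeholders:&lt;/P&gt;
&lt;PRE&gt;resources:
  pipelines:
    my_pipeline:
      name: my_pipeline
      development: true
      configuration:
        pipelines.clusterShutdown.delay: "60s"&lt;/PRE&gt;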
&lt;P&gt;WHY THIS IS DIFFERENT FROM JOB CLUSTER AUTO-TERMINATION&lt;/P&gt;
&lt;P&gt;Lakeflow Spark Declarative Pipelines (SDP), previously known as DLT, manage their own compute lifecycle. You cannot use the standard autotermination_minutes setting from cluster policies or job cluster configs. Attempting to set autotermination_minutes in a compute policy for an SDP pipeline will result in an error. The pipelines.clusterShutdown.delay setting is the correct and only mechanism for controlling this behavior.&lt;/P&gt;
&lt;P&gt;QUICK SUMMARY&lt;/P&gt;
&lt;PRE&gt;Development mode default shutdown delay: 2 hours
Production mode default shutdown delay: 0 seconds
Custom override: pipelines.clusterShutdown.delay (e.g., "60s", "0s", "5m")&lt;/PRE&gt;
&lt;P&gt;DOCUMENTATION REFERENCES&lt;/P&gt;
&lt;P&gt;Configure compute for a pipeline:&lt;BR /&gt;
&lt;A href="https://docs.databricks.com/en/delta-live-tables/configure-compute.html" target="_blank"&gt;https://docs.databricks.com/en/delta-live-tables/configure-compute.html&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Pipeline settings:&lt;BR /&gt;
&lt;A href="https://docs.databricks.com/en/delta-live-tables/settings.html" target="_blank"&gt;https://docs.databricks.com/en/delta-live-tables/settings.html&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Run a pipeline update:&lt;BR /&gt;
&lt;A href="https://docs.databricks.com/en/delta-live-tables/updates.html" target="_blank"&gt;https://docs.databricks.com/en/delta-live-tables/updates.html&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Note: "DLT" (Delta Live Tables) has been renamed to Lakeflow Spark Declarative Pipelines (SDP). The configuration parameters and behavior remain the same.&lt;/P&gt;
&lt;P&gt;* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.&lt;/P&gt;</description>
      <pubDate>Mon, 09 Mar 2026 03:53:27 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/how-to-auto-terminate-dlt-managed-clusters-after-pipeline/m-p/150307#M11513</guid>
      <dc:creator>SteveOstrowski</dc:creator>
      <dc:date>2026-03-09T03:53:27Z</dc:date>
    </item>
  </channel>
</rss>

