<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Query does finish on serverless but will not on classic in Warehousing &amp; Analytics</title>
    <link>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/143468#M2450</link>
    <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/23233"&gt;@NandiniN&lt;/a&gt;&amp;nbsp;thanks for your response,&lt;BR /&gt;&lt;BR /&gt;I tried influencing the way the skew join is applied and the way files are written, but with no results.&lt;BR /&gt;&lt;BR /&gt;This query is part of a business vault run and is called from dbt. It is not a job run native to Databricks, and sadly I cannot simulate this run from a job with a dbt command, as a classic warehouse is unavailable to use.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Ties_0-1767956963036.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/22817i935E58A09E29C91A/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Ties_0-1767956963036.png" alt="Ties_0-1767956963036.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
    <pubDate>Fri, 09 Jan 2026 11:09:14 GMT</pubDate>
    <dc:creator>Ties</dc:creator>
    <dc:date>2026-01-09T11:09:14Z</dc:date>
    <item>
      <title>Query does finish on serverless but will not on classic</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/143371#M2446</link>
      <description>&lt;P&gt;Dear community,&lt;BR /&gt;&lt;BR /&gt;We are running nightly business vaults. Last year the run stopped finishing on a classic warehouse; after testing showed it ran to completion on a serverless warehouse, we switched and it stayed that way, but costs have increased a lot. I have been testing numerous spark_conf options with the problematic query on a classic warehouse to check whether, after optimization, I could set it back to classic. It hangs at 100% with no rows written, indicating a commit issue. I tried materializing into a view and writing the table with a post-hook, but this only moves the issue to the post-hook.&lt;BR /&gt;Longer-running but similarly structured queries in the same schema do finish on classic.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Ties_0-1767884888474.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/22784iC05F3DFED10DE056/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Ties_0-1767884888474.png" alt="Ties_0-1767884888474.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;The example image above has been showing 100% after a few minutes, but the query has been running for 17 minutes and will run indefinitely until canceled.&lt;BR /&gt;&lt;BR /&gt;What are my options to combat this hanging state on a classic warehouse?&lt;BR /&gt;&lt;BR /&gt;Below are a few spark_conf options I touched/used/changed while testing:&lt;BR /&gt;"spark.sql.adaptive.enabled": "true",&lt;BR /&gt;"spark.sql.adaptive.coalescePartitions.enabled": "true",&lt;BR /&gt;"spark.sql.adaptive.coalescePartitions.minPartitionSize": "134217728",&lt;BR /&gt;"spark.sql.adaptive.skewJoin.enabled": "true",&lt;BR /&gt;"spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes": "134217728",&lt;BR /&gt;"spark.sql.adaptive.skewJoin.skewedPartitionFactor": "5",&lt;BR /&gt;"spark.sql.shuffle.partitions": "64",&lt;BR /&gt;"spark.sql.autoBroadcastJoinThreshold": "-1",&lt;BR /&gt;"spark.sql.broadcastTimeout": "1200",&lt;BR /&gt;"spark.sql.network.timeout": "800s",&lt;BR /&gt;"spark.sql.execution.arrow.maxRecordsPerBatch": "50000",&lt;BR /&gt;"spark.sql.files.maxPartitionBytes": "134217728",&lt;BR /&gt;"spark.sql.adaptive.advisoryPartitionSizeInBytes": "134217728",&lt;BR /&gt;"spark.sql.adaptive.localShuffleReader.enabled": "true",&lt;BR /&gt;"spark.sql.join.preferSortMergeJoin": "true",&lt;BR /&gt;"spark.sql.optimizer.dynamicPartitionPruning.enabled": "true",&lt;BR /&gt;"spark.databricks.delta.optimizeWrite.enabled": "true",&lt;BR /&gt;"spark.databricks.delta.autoCompact.enabled": "true",&lt;BR /&gt;"spark.sql.execution.arrow.pyspark.enabled": "true",&lt;BR /&gt;"spark.sql.inMemoryColumnarStorage.compressed": "true",&lt;BR /&gt;"spark.sql.inMemoryColumnarStorage.batchSize": "10000",&lt;BR /&gt;"spark.sql.cbo.enabled": "false",&lt;BR /&gt;&lt;BR /&gt;Thanks for reading!&lt;BR /&gt;Ties&lt;/P&gt;</description>
      <pubDate>Thu, 08 Jan 2026 15:23:02 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/143371#M2446</guid>
      <dc:creator>Ties</dc:creator>
      <dc:date>2026-01-08T15:23:02Z</dc:date>
    </item>
    <item>
      <title>Re: Query does finish on serverless but will not on classic</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/143378#M2447</link>
      <description>&lt;P&gt;If you have a Support contract, this would be a good one to create a ticket for.&lt;/P&gt;
&lt;P&gt;That being said, is there a reason you need to be on classic? It's an engine that should really be considered a starter engine; you should be using Pro or Serverless for anything where you consider performance a measuring stick. Have you tried the same on a Pro warehouse?&lt;/P&gt;
&lt;P&gt;Also, how are you setting these configs? SQL warehouses don't respect all configs, so if you are setting them in dbt, it's possible they are being ignored or causing unintended effects.&lt;/P&gt;
&lt;P&gt;Also, if I'm reading this correctly, you're going from 7M to 3.7B rows after an inner join? Do you have N matches for each row? It's possible that the "engine" improvements in Serverless handle this explode much better than the classic engine.&lt;/P&gt;</description>
      <pubDate>Thu, 08 Jan 2026 16:14:30 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/143378#M2447</guid>
      <dc:creator>MoJaMa</dc:creator>
      <dc:date>2026-01-08T16:14:30Z</dc:date>
    </item>
    <item>
      <title>Re: Query does finish on serverless but will not on classic</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/143379#M2448</link>
      <description>&lt;P&gt;Hey &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/204437"&gt;@Ties&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;When a query reaches 100% completion but fails to commit, it usually points to a &lt;STRONG&gt;metadata bottleneck&lt;/STRONG&gt;, &lt;STRONG&gt;file system contention&lt;/STRONG&gt;, or &lt;STRONG&gt;massive data skew&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;While the job is hanging at 100%, go to the &lt;STRONG&gt;Spark UI -&amp;gt; Executors tab -&amp;gt; Driver -&amp;gt; Thread Dump&lt;/STRONG&gt;. Look for threads stuck in &lt;CODE&gt;FileCommitProtocol&lt;/CODE&gt; or &lt;CODE&gt;DeltaLog&lt;/CODE&gt;. If you see many threads waiting on S3/ADLS/GCS listing, you have a file-count problem.&lt;/P&gt;
&lt;P&gt;Checking the driver log will also give an idea of which action is stuck. When Spark says 100% but won't finish, the remaining work is often happening on the &lt;CODE&gt;Driver&lt;/CODE&gt;. But I would still ask you to go to the Spark UI and see if there are any tasks running; a screenshot of that can help check this further.&lt;/P&gt;
&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Thu, 08 Jan 2026 16:19:18 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/143379#M2448</guid>
      <dc:creator>NandiniN</dc:creator>
      <dc:date>2026-01-08T16:19:18Z</dc:date>
    </item>
    <item>
      <title>Re: Query does finish on serverless but will not on classic</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/143457#M2449</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/425"&gt;@MoJaMa&lt;/a&gt;&amp;nbsp;thanks for your response,&lt;BR /&gt;&lt;BR /&gt;The reason I want to be on classic is cost. Serverless runs have sadly tripled our costs. I have tried the query in question on Pro and it finishes in two minutes, but cost indications for running only on Pro do not seem to fit the budget either. Hence the choice of serverless for now.&lt;BR /&gt;&lt;BR /&gt;The spark_conf dictionary can be set from within the model inside the config block, and I noticed those settings are being picked up. You can also set these in dbt_project.yml if you want to apply them to a whole schema or just one model.&lt;BR /&gt;&lt;BR /&gt;The explode you are seeing is not intentional. Longer-running but similarly structured queries in the same schema have this as well, and those finish within a few minutes. It seems that Databricks optimizing/restructuring the query results in this explode for some reason; there must be some Databricks logic to it, I guess. It is clear that Pro and serverless are better at handling it.&lt;/P&gt;</description>
      <pubDate>Fri, 09 Jan 2026 10:16:27 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/143457#M2449</guid>
      <dc:creator>Ties</dc:creator>
      <dc:date>2026-01-09T10:16:27Z</dc:date>
    </item>
    <item>
      <title>Re: Query does finish on serverless but will not on classic</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/143468#M2450</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/23233"&gt;@NandiniN&lt;/a&gt;&amp;nbsp;thanks for your response,&lt;BR /&gt;&lt;BR /&gt;I tried influencing the way the skew join is applied and the way files are written, but with no results.&lt;BR /&gt;&lt;BR /&gt;This query is part of a business vault run and is called from dbt. It is not a job run native to Databricks, and sadly I cannot simulate this run from a job with a dbt command, as a classic warehouse is unavailable to use.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Ties_0-1767956963036.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/22817i935E58A09E29C91A/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Ties_0-1767956963036.png" alt="Ties_0-1767956963036.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 09 Jan 2026 11:09:14 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/143468#M2450</guid>
      <dc:creator>Ties</dc:creator>
      <dc:date>2026-01-09T11:09:14Z</dc:date>
    </item>
    <item>
      <title>Re: Query does finish on serverless but will not on classic</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/146522#M2474</link>
      <description>&lt;P&gt;I saw that a reply in this topic was selected as a solution. Sadly that is not the case; we are still in limbo with this issue.&lt;BR /&gt;&lt;BR /&gt;I tried setting up a meeting through support for Databricks on Azure, but the third-party rep Microsoft provided did not show up.&lt;BR /&gt;&lt;BR /&gt;I also set the business vault runs to run on a Pro warehouse for a week to compare: costs are slightly higher than serverless, while performance is the same.&lt;/P&gt;</description>
      <pubDate>Mon, 02 Feb 2026 10:57:15 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/query-does-finish-on-serverless-but-will-not-on-classic/m-p/146522#M2474</guid>
      <dc:creator>Ties</dc:creator>
      <dc:date>2026-02-02T10:57:15Z</dc:date>
    </item>
  </channel>
</rss>

