<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>delta live tables in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/delta-live-tables/m-p/132254#M49407</link>
    <description>Community thread: a streaming table that filters a dlt.auto_cdc source fails with DELTA_SOURCE_TABLE_IGNORE_CHANGES; replies suggest skipChangeCommits, switching to a materialized view with incremental refresh, or using CDF.</description>
    <pubDate>Wed, 17 Sep 2025 15:04:54 GMT</pubDate>
    <dc:creator>BS_THE_ANALYST</dc:creator>
    <dc:date>2025-09-17T15:04:54Z</dc:date>
    <item>
      <title>delta live tables</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-live-tables/m-p/131549#M49394</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I have a source table that is a Delta Live streaming table created using dlt.auto_cdc logic. Now I want to create another streaming table that filters records from that table per client, while still applying auto CDC logic for incremental processing. I tried doing that with a materialized view, but it refreshed fully instead of incrementally, so I tried to create a client-specific streaming table instead, and it gives me this error:&lt;/P&gt;&lt;P&gt;org.apache.spark.sql.streaming.StreamingQueryException: [STREAM_FAILED] Query [id = 80319c3f-5654-4089-9a10-ecea0180cf09, runId = c6d93065-56a6-443d-b834-9e767e111e12] terminated with exception: [DELTA_SOURCE_TABLE_IGNORE_CHANGES] Detected a data update (for example WRITE (Map(mode -&amp;gt; Overwrite, statsOnLoad -&amp;gt; false))) in the source table at version 3. This is currently not supported. If this is going to happen regularly and you are okay to skip changes, set the option 'skipChangeCommits' to 'true'. If you would like the data update to be reflected, please restart this query with a fresh checkpoint directory or do a full refresh if you are using DLT. If you need to handle these changes, please switch to MVs. The source table can be found at path abfss://unity-catalog-storage@dbstoragevyadqj5lvd744.dfs.core.windows.net/2998048117548069/__unitystorage/catalogs/78873f87-08d4-40bc-8256-611e6c893ef7/tables/2b764114-c9ed-459a-a54a-e68c77a6f6af. SQLSTATE: XXKST&lt;BR /&gt;=== Streaming Query ===&lt;BR /&gt;Identifier: taps.india.__materialization_mat_421df95e_f7b4_46a9_a8dd_ac09e7cf071b_customer_india11_temp_1 [id = 80319c3f-5654-4089-9a10-ecea0180cf09, runId = c6d93065-56a6-443d-b834-9e767e111e12]&lt;BR /&gt;Current Start Offsets: {DeltaSource[abfss://unity-catalog-storage@dbstoragevyadqj5lvd744.dfs.core.windows.net/2998048117548069/__unitystorage/catalogs/78873f87-08d4-40bc-8256-611e6c893ef7/tables/2b764114-c9ed-459a-a54a-e68c77a6f6af]: {"sourceVersion":1,"reservoirId":"b31124cc-7c53-4865-bdb5-ec0300cd6ef8","reservoirVersion":3,"index":-1,"isStartingVersion":false}}&lt;BR /&gt;Current End Offsets: {DeltaSource[abfss://unity-catalog-storage@dbstoragevyadqj5lvd744.dfs.core.windows.net/2998048117548069/__unitystorage/catalogs/78873f87-08d4-40bc-8256-611e6c893ef7/tables/2b764114-c9ed-459a-a54a-e68c77a6f6af]: {"sourceVersion":1,"reservoirId":"b31124cc-7c53-4865-bdb5-ec0300cd6ef8","reservoirVersion":3,"index":-1,"isStartingVersion":false}}&lt;/P&gt;&lt;P&gt;Current State: ACTIVE&lt;BR /&gt;Thread State: RUNNABLE&lt;/P&gt;&lt;P&gt;Logical Plan:&lt;BR /&gt;~WriteToMicroBatchDataSourceV1 DeltaSink[abfss://unity-catalog-storage@dbstoragevyadqj5lvd744.dfs.core.windows.net/2998048117548069/__unitystorage/catalogs/78873f87-08d4-40bc-8256-611e6c893ef7/tables/aa64aa94-44ce-4803-b1e4-63ec3537bc95], 80319c3f-5654-4089-9a10-ecea0180cf09, [path=abfss://unity-catalog-storage@dbstoragevyadqj5lvd744.dfs.core.windows.net/2998048117548069/__unitystorage/catalogs/78873f87-08d4-40bc-8256-611e6c893ef7/tables/aa64aa94-44ce-4803-b1e4-63ec3537bc95, queryName=taps.india.__materialization_mat_421df95e_f7b4_46a9_a8dd_ac09e7cf071b_customer_india11_temp_1, checkpointLocation=abfss://unity-catalog-storage@dbstoragevyadqj5lvd744.dfs.core.windows.net/2998048117548069/__unitystorage/catalogs/78873f87-08d4-40bc-8256-611e6c893ef7/tables/aa64aa94-44ce-4803-b1e4-63ec3537bc95/_dlt_metadata/checkpoints/taps.india.customer_india11_temp/0], Append&lt;BR /&gt;+- ~CollectMetrics 
pipelines.expectations.taps.india.customer_india11_temp, [count(1) AS total#1880L, count(CASE WHEN false THEN 1 END) AS dropped#1881L, count(CASE WHEN false THEN 1 END) AS allowed#1882L], 186&lt;BR /&gt;+- ~Project [customer_id#1723, name#1724, email#1725, address#1726, event_time#1727, country#1728, _rescued_data#1729]&lt;BR /&gt;+- ~StreamingExecutionRelation DeltaSource[abfss://unity-catalog-storage@dbstoragevyadqj5lvd744.dfs.core.windows.net/2998048117548069/__unitystorage/catalogs/78873f87-08d4-40bc-8256-611e6c893ef7/tables/2b764114-c9ed-459a-a54a-e68c77a6f6af], [__enzyme__row__id__#1722, customer_id#1723, name#1724, email#1725, address#1726, event_time#1727, country#1728, _rescued_data#1729]&lt;/P&gt;&lt;P&gt;at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:554)&lt;BR /&gt;at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:49)&lt;BR /&gt;at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:295)&lt;BR /&gt;at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)&lt;BR /&gt;at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:291)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:47)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:44)&lt;BR /&gt;at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:30)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionTags(AttributionContextTracing.scala:96)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionTags$(AttributionContextTracing.scala:77)&lt;BR /&gt;at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:30)&lt;BR /&gt;at com.databricks.spark.util.PublicDBLogging.withAttributionTags0(DatabricksSparkUsageLogger.scala:91)&lt;BR /&gt;at com.databricks.spark.util.DatabricksSparkUsageLogger.withAttributionTags(DatabricksSparkUsageLogger.scala:195)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging.$anonfun$withAttributionTags$1(UsageLogger.scala:668)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:780)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:789)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging.withAttributionTags(UsageLogger.scala:668)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging.withAttributionTags$(UsageLogger.scala:666)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.StreamExecution.withAttributionTags(StreamExecution.scala:87)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:383)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$3(StreamExecution.scala:286)&lt;BR /&gt;at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)&lt;BR /&gt;at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$2(StreamExecution.scala:286)&lt;BR /&gt;at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)&lt;BR /&gt;at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51)&lt;BR /&gt;at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:104)&lt;BR /&gt;at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:109)&lt;BR /&gt;at scala.util.Using$.resource(Using.scala:269)&lt;BR /&gt;at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:108)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:285)&lt;BR /&gt;com.databricks.sql.transaction.tahoe.DeltaUnsupportedOperationException: [DELTA_SOURCE_TABLE_IGNORE_CHANGES] Detected a data update (for example WRITE (Map(mode -&amp;gt; Overwrite, statsOnLoad -&amp;gt; false))) in the source table at version 3. This is currently not supported. If this is going to happen regularly and you are okay to skip changes, set the option 'skipChangeCommits' to 'true'. If you would like the data update to be reflected, please restart this query with a fresh checkpoint directory or do a full refresh if you are using DLT. If you need to handle these changes, please switch to MVs. The source table can be found at path abfss://unity-catalog-storage@dbstoragevyadqj5lvd744.dfs.core.windows.net/2998048117548069/__unitystorage/catalogs/78873f87-08d4-40bc-8256-611e6c893ef7/tables/2b764114-c9ed-459a-a54a-e68c77a6f6af.&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.DeltaErrorsBase.deltaSourceIgnoreChangesError(DeltaErrors.scala:192)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.DeltaErrorsBase.deltaSourceIgnoreChangesError$(DeltaErrors.scala:186)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.DeltaErrors$.deltaSourceIgnoreChangesError(DeltaErrors.scala:3734)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.validateCommitAndDecideSkipping(DeltaSource.scala:1140)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.$anonfun$getFileChanges$3(DeltaSource.scala:838)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.storage.ClosableIterator.processAndClose(ClosableIterator.scala:33)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.storage.ClosableIterator.processAndClose$(ClosableIterator.scala:31)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource$$anon$3.processAndClose(DeltaSource.scala:1670)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.$anonfun$getFileChanges$2(DeltaSource.scala:833)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.storage.ClosableIterator$IteratorFlatMapCloseOp$$anon$2.&amp;lt;init&amp;gt;(ClosableIterator.scala:71)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.storage.ClosableIterator$IteratorFlatMapCloseOp$.flatMapWithClose$extension(ClosableIterator.scala:68)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.filterAndIndexDeltaLogs$1(DeltaSource.scala:828)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.$anonfun$getFileChanges$5(DeltaSource.scala:861)&lt;BR /&gt;at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:583)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.getFileChanges(DeltaSource.scala:854)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.getFileChangesWithRateLimit(DeltaSource.scala:305)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.getFileChangesWithRateLimit$(DeltaSource.scala:292)&lt;BR /&gt;at 
com.databricks.sql.transaction.tahoe.sources.DeltaSource.getFileChangesWithRateLimit(DeltaSource.scala:751)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.getNextOffsetFromPreviousOffset(DeltaSource.scala:469)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.getNextOffsetFromPreviousOffset$(DeltaSource.scala:453)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.com$databricks$sql$transaction$tahoe$sources$DeltaSourceEdge$$super$getNextOffsetFromPreviousOffset(DeltaSource.scala:751)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceEdge.getNextOffsetFromPreviousOffset(DeltaSourceEdge.scala:620)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceEdge.getNextOffsetFromPreviousOffset$(DeltaSourceEdge.scala:614)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.getNextOffsetFromPreviousOffset(DeltaSource.scala:751)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.$anonfun$latestOffsetInternal$1(DeltaSource.scala:1005)&lt;BR /&gt;at scala.Option.map(Option.scala:230)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.latestOffsetInternal(DeltaSource.scala:1005)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.initLastOffsetForTriggerAvailableNow(DeltaSource.scala:275)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.initLastOffsetForTriggerAvailableNow$(DeltaSource.scala:273)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.com$databricks$sql$transaction$tahoe$sources$DeltaSourceEdge$$super$initLastOffsetForTriggerAvailableNow(DeltaSource.scala:751)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceEdge.initLastOffsetForTriggerAvailableNow(DeltaSourceEdge.scala:846)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceEdge.initLastOffsetForTriggerAvailableNow$(DeltaSourceEdge.scala:844)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.initLastOffsetForTriggerAvailableNow(DeltaSource.scala:751)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.initForTriggerAvailableNowIfNeeded(DeltaSource.scala:269)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.initForTriggerAvailableNowIfNeeded$(DeltaSource.scala:265)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.initForTriggerAvailableNowIfNeeded(DeltaSource.scala:751)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.$anonfun$latestOffset$1(DeltaSource.scala:997)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.withOperationTypeTag(DeltaLogging.scala:325)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.withOperationTypeTag$(DeltaLogging.scala:312)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.withOperationTypeTag(DeltaSource.scala:751)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.$anonfun$recordDeltaOperationInternal$2(DeltaLogging.scala:178)&lt;BR /&gt;at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:418)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:416)&lt;BR /&gt;at 
com.databricks.sql.transaction.tahoe.sources.DeltaSource.recordFrameProfile(DeltaSource.scala:751)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.$anonfun$recordDeltaOperationInternal$1(DeltaLogging.scala:177)&lt;BR /&gt;at com.databricks.logging.UsageLogging.$anonfun$recordOperation$1(UsageLogging.scala:508)&lt;BR /&gt;at com.databricks.logging.UsageLogging.executeThunkAndCaptureResultTags$1(UsageLogging.scala:613)&lt;BR /&gt;at com.databricks.logging.UsageLogging.$anonfun$recordOperationWithResultTags$4(UsageLogging.scala:636)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:49)&lt;BR /&gt;at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:295)&lt;BR /&gt;at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)&lt;BR /&gt;at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:291)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:47)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:44)&lt;BR /&gt;at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:30)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionTags(AttributionContextTracing.scala:96)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionTags$(AttributionContextTracing.scala:77)&lt;BR /&gt;at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:30)&lt;BR /&gt;at com.databricks.logging.UsageLogging.recordOperationWithResultTags(UsageLogging.scala:608)&lt;BR /&gt;at com.databricks.logging.UsageLogging.recordOperationWithResultTags$(UsageLogging.scala:517)&lt;BR /&gt;at com.databricks.spark.util.PublicDBLogging.recordOperationWithResultTags(DatabricksSparkUsageLogger.scala:30)&lt;BR /&gt;at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:509)&lt;BR /&gt;at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:475)&lt;BR /&gt;at com.databricks.spark.util.PublicDBLogging.recordOperation(DatabricksSparkUsageLogger.scala:30)&lt;BR /&gt;at com.databricks.spark.util.PublicDBLogging.recordOperation0(DatabricksSparkUsageLogger.scala:87)&lt;BR /&gt;at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:173)&lt;BR /&gt;at com.databricks.spark.util.UsageLogger.recordOperation(UsageLogger.scala:78)&lt;BR /&gt;at com.databricks.spark.util.UsageLogger.recordOperation$(UsageLogger.scala:65)&lt;BR /&gt;at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:132)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging.recordOperation(UsageLogger.scala:537)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging.recordOperation$(UsageLogger.scala:516)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.recordOperation(DeltaSource.scala:751)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperationInternal(DeltaLogging.scala:176)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperation(DeltaLogging.scala:166)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperation$(DeltaLogging.scala:155)&lt;BR /&gt;at 
com.databricks.sql.transaction.tahoe.sources.DeltaSource.recordDeltaOperation(DeltaSource.scala:751)&lt;BR /&gt;at com.databricks.sql.transaction.tahoe.sources.DeltaSource.latestOffset(DeltaSource.scala:995)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$4(MicroBatchExecution.scala:1091)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.ProgressContext.reportTimeTaken(ProgressReporter.scala:328)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$2(MicroBatchExecution.scala:1089)&lt;BR /&gt;at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)&lt;BR /&gt;at scala.collection.Iterator.foreach(Iterator.scala:943)&lt;BR /&gt;at scala.collection.Iterator.foreach$(Iterator.scala:943)&lt;BR /&gt;at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)&lt;BR /&gt;at scala.collection.IterableLike.foreach(IterableLike.scala:74)&lt;BR /&gt;at scala.collection.IterableLike.foreach$(IterableLike.scala:73)&lt;BR /&gt;at scala.collection.AbstractIterable.foreach(Iterable.scala:56)&lt;BR /&gt;at scala.collection.TraversableLike.map(TraversableLike.scala:286)&lt;BR /&gt;at scala.collection.TraversableLike.map$(TraversableLike.scala:279)&lt;BR /&gt;at scala.collection.AbstractTraversable.map(Traversable.scala:108)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$1(MicroBatchExecution.scala:1072)&lt;BR /&gt;at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MicroBatchExecution.withProgressLocked(MicroBatchExecution.scala:1877)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MicroBatchExecution.constructNextBatch(MicroBatchExecution.scala:1068)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MultiBatchRollbackSupport.constructNextBatchWithRollbackHandling(MultiBatchRollbackSupport.scala:144)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MultiBatchRollbackSupport.constructNextBatchWithRollbackHandling$(MultiBatchRollbackSupport.scala:132)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MicroBatchExecution.constructNextBatchWithRollbackHandling(MicroBatchExecution.scala:78)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$executeOneBatch$4(MicroBatchExecution.scala:726)&lt;BR /&gt;at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.ProgressContext.reportTimeTaken(ProgressReporter.scala:328)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$executeOneBatch$3(MicroBatchExecution.scala:705)&lt;BR /&gt;at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:49)&lt;BR /&gt;at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:295)&lt;BR /&gt;at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)&lt;BR /&gt;at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:291)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:47)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:44)&lt;BR /&gt;at 
com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:30)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionTags(AttributionContextTracing.scala:96)&lt;BR /&gt;at com.databricks.logging.AttributionContextTracing.withAttributionTags$(AttributionContextTracing.scala:77)&lt;BR /&gt;at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:30)&lt;BR /&gt;at com.databricks.spark.util.PublicDBLogging.withAttributionTags0(DatabricksSparkUsageLogger.scala:91)&lt;BR /&gt;at com.databricks.spark.util.DatabricksSparkUsageLogger.withAttributionTags(DatabricksSparkUsageLogger.scala:195)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging.$anonfun$withAttributionTags$1(UsageLogger.scala:668)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:780)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:789)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging.withAttributionTags(UsageLogger.scala:668)&lt;BR /&gt;at com.databricks.spark.util.UsageLogging.withAttributionTags$(UsageLogger.scala:666)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.StreamExecution.withAttributionTags(StreamExecution.scala:87)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MicroBatchExecution.executeOneBatch(MicroBatchExecution.scala:699)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStreamWithListener$1(MicroBatchExecution.scala:660)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStreamWithListener$1$adapted(MicroBatchExecution.scala:660)&lt;BR /&gt;at org.apache.spark.sql.execution.streaming.ConcurrentExecutor.$anonfun$runOneBatch$4(TriggerExecutor.scala:675)&lt;BR /&gt;at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.$anonfun$run$1(SparkThreadLocalForwardingThreadPoolExecutor.scala:157)&lt;BR /&gt;at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)&lt;BR /&gt;at com.databricks.spark.util.IdentityClaim$.withClaim(IdentityClaim.scala:48)&lt;BR /&gt;at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.$anonfun$runWithCaptured$4(SparkThreadLocalForwardingThreadPoolExecutor.scala:113)&lt;BR /&gt;at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51)&lt;BR /&gt;at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.runWithCaptured(SparkThreadLocalForwardingThreadPoolExecutor.scala:112)&lt;BR /&gt;at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.runWithCaptured$(SparkThreadLocalForwardingThreadPoolExecutor.scala:89)&lt;BR /&gt;at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.runWithCaptured(SparkThreadLocalForwardingThreadPoolExecutor.scala:154)&lt;BR /&gt;at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.run(SparkThreadLocalForwardingThreadPoolExecutor.scala:157)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:840)&lt;/P&gt;</description>
      <pubDate>Wed, 10 Sep 2025 15:02:58 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-live-tables/m-p/131549#M49394</guid>
      <dc:creator>tenzinpro</dc:creator>
      <dc:date>2025-09-10T15:02:58Z</dc:date>
    </item>
    <item>
      <title>Re: delta live tables</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-live-tables/m-p/132254#M49407</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/183953"&gt;@tenzinpro&lt;/a&gt;&amp;nbsp;have you looked into the documentation on incremental refresh for materialised views yet? &lt;A href="https://docs.databricks.com/aws/en/optimizations/incremental-refresh" target="_blank"&gt;https://docs.databricks.com/aws/en/optimizations/incremental-refresh&lt;/A&gt; There's a section in there that jumped out at me:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="BS_THE_ANALYST_0-1758121458487.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/20042i7E4F643A18F46A73/image-size/medium?v=v2&amp;amp;px=400" role="button" title="BS_THE_ANALYST_0-1758121458487.png" alt="BS_THE_ANALYST_0-1758121458487.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I'd suggest reading a little further in the article to see whether anything you're doing violates an incremental-refresh requirement.&lt;/P&gt;
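&lt;P&gt;If you want to confirm what the engine actually did on a refresh, you can query the pipeline event log. A minimal sketch, assuming a Unity Catalog materialized view named taps.india.customer_india_mv (hypothetical, substitute your own) and the planning_information event that doc describes; exact detail fields can vary by release:&lt;/P&gt;&lt;PRE&gt;# Sketch: see whether each materialized-view refresh was planned incrementally
# or as a complete recompute by reading the pipeline event log.
spark.sql("""
    SELECT timestamp, message
    FROM event_log(TABLE(taps.india.customer_india_mv))
    WHERE event_type = 'planning_information'
    ORDER BY timestamp DESC
""").show(truncate=False)
&lt;/PRE&gt;&lt;P&gt;All the best,&lt;BR /&gt;BS&lt;/P&gt;</description>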
      <pubDate>Wed, 17 Sep 2025 15:04:54 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-live-tables/m-p/132254#M49407</guid>
      <dc:creator>BS_THE_ANALYST</dc:creator>
      <dc:date>2025-09-17T15:04:54Z</dc:date>
    </item>
    <item>
      <title>Re: delta live tables</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-live-tables/m-p/132591#M49565</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/183953"&gt;@tenzinpro&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;This is an expected error: &lt;SPAN&gt;"[DELTA_SOURCE_TABLE_IGNORE_CHANGES] Detected a data update"&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;As explained in the error:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;This is currently not supported. If this is going to happen regularly and you are okay to skip changes, set the option 'skipChangeCommits' to 'true'. If you would like the data update to be reflected, please restart this query with a fresh checkpoint directory or do a full refresh if you are using DLT. If you need to handle these changes, please switch to MVs (materialized views).&lt;/SPAN&gt;&lt;/P&gt;
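&lt;P&gt;&lt;SPAN&gt;For context, the failing shape is roughly the following (a minimal sketch with hypothetical table names): the AUTO CDC flow rewrites rows in the source table in place, and a plain streaming read downstream cannot consume those update commits.&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;import dlt
from pyspark.sql import functions as F

# Source streaming table maintained by AUTO CDC. Each upsert rewrites existing
# rows, which produces the update commits the error complains about.
dlt.create_streaming_table("customer")

dlt.apply_changes(              # newer releases also expose dlt.create_auto_cdc_flow
    target="customer",
    source="customer_raw",      # hypothetical CDC feed
    keys=["customer_id"],
    sequence_by=F.col("event_time"),
    stored_as_scd_type=1,
)

# Failing step: a plain streaming read of the AUTO CDC target hits the UPDATE
# commits and raises DELTA_SOURCE_TABLE_IGNORE_CHANGES.
@dlt.table(name="customer_india")
def customer_india():
    return spark.readStream.table("customer").where(F.col("country") == "India")
&lt;/PRE&gt;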
&lt;P&gt;&lt;SPAN&gt;I recommend the options below to achieve your use case:&lt;BR aria-hidden="true" /&gt;1. Define a Materialized View (MV) instead of a Streaming Table (ST), or&lt;BR aria-hidden="true" /&gt;2. Use skipChangeCommits to skip changes on the source such as updates/deletes (a minimal sketch follows below), or&lt;BR aria-hidden="true" /&gt;3. Use the source with CDF and follow the steps here:&lt;BR aria-hidden="true" /&gt;&lt;A class="c-link" href="https://community.databricks.com/t5/technical-blog/propagating-deletes-managing-data-removal-using-delta-live/ba-p/90978" target="_blank" rel="noopener noreferrer"&gt;https://community.databricks.com/t5/technical-blog/propagating-deletes-managing-data-removal-using-delta-live/ba-p/90978&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
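&lt;P&gt;&lt;SPAN&gt;A minimal sketch of option 2, assuming a Python pipeline and hypothetical table names. Note that skipped commits are never replayed, so the client table can drift from the source after updates or deletes:&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;import dlt
from pyspark.sql import functions as F

# Option 2: keep the target a streaming table, but ignore the UPDATE/DELETE
# commits that AUTO CDC writes into the source table.
@dlt.table(name="customer_india")
def customer_india():
    return (
        spark.readStream
        .option("skipChangeCommits", "true")
        .table("customer")
        .where(F.col("country") == "India")
    )
&lt;/PRE&gt;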
&lt;P&gt;&lt;SPAN&gt;And &lt;/SPAN&gt;&lt;A href="https://docs.databricks.com/aws/en/optimizations/incremental-refresh" target="_self"&gt;incremental refresh for materialized views&lt;/A&gt;, as suggested by &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/146924"&gt;@BS_THE_ANALYST&lt;/a&gt;, is the best way.&lt;/P&gt;
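&lt;P&gt;A minimal sketch of option 1 (hypothetical names): a simple deterministic filter and projection like this is the kind of query the incremental-refresh engine can usually maintain without a full recompute on serverless pipelines:&lt;/P&gt;&lt;PRE&gt;import dlt
from pyspark.sql import functions as F

# Option 1: a materialized view recomputes correct results when the source
# receives updates or deletes. A batch read under @dlt.table defines an MV.
@dlt.table(name="customer_india_mv")
def customer_india_mv():
    return (
        spark.read.table("customer")
        .where(F.col("country") == "India")
        .select("customer_id", "name", "email", "address", "event_time")
    )
&lt;/PRE&gt;</description>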
      <pubDate>Fri, 19 Sep 2025 18:42:37 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-live-tables/m-p/132591#M49565</guid>
      <dc:creator>NandiniN</dc:creator>
      <dc:date>2025-09-19T18:42:37Z</dc:date>
    </item>
  </channel>
</rss>

