<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic create_auto_cdc_from_snapshot_flow vs create_auto_cdc_flow – when is snapshot CDC actually worth it? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/create-auto-cdc-from-snapshot-flow-vs-create-auto-cdc-flow-when/m-p/144461#M52326</link>
    <description>&lt;P&gt;I am deciding between create_auto_cdc_from_snapshot_flow() and create_auto_cdc_flow() in a pipeline.&lt;/P&gt;&lt;P&gt;My source is a daily full snapshot table:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;No operation column (no insert/update/delete flags)&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;Order can be derived from snapshot_date (sequence by)&lt;/LI&gt;&lt;LI&gt;Rows are unique based on key id&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;create_auto_cdc_from_snapshot_flow() fits this model, but it requires the source lambda to return (DataFrame, snapshot_version), which feels heavy to implement compared to just producing CDC rows and using create_auto_cdc_flow().&lt;/P&gt;&lt;P&gt;So the question is:&lt;/P&gt;&lt;P&gt;For a system that only provides full daily snapshots (no row-level operations), what are the real technical advantages of using create_auto_cdc_from_snapshot_flow()?&lt;/P&gt;&lt;P&gt;Is snapshot-based AUTO CDC mainly a convenience API, or does it give better correctness, SCD2 handling, or performance guarantees than the create_auto_cdc_flow() approach?&lt;/P&gt;</description>
    <pubDate>Mon, 19 Jan 2026 17:01:04 GMT</pubDate>
    <dc:creator>batch_bender</dc:creator>
    <dc:date>2026-01-19T17:01:04Z</dc:date>
    <item>
      <title>create_auto_cdc_from_snapshot_flow vs create_auto_cdc_flow – when is snapshot CDC actually worth it?</title>
      <link>https://community.databricks.com/t5/data-engineering/create-auto-cdc-from-snapshot-flow-vs-create-auto-cdc-flow-when/m-p/144461#M52326</link>
      <description>&lt;P&gt;I am deciding between create_auto_cdc_from_snapshot_flow() and create_auto_cdc_flow() in a pipeline.&lt;/P&gt;&lt;P&gt;My source is a daily full snapshot table:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;No operation column (no insert/update/delete flags)&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;Order can be derived from snapshot_date (sequence by)&lt;/LI&gt;&lt;LI&gt;Rows are unique based on key id&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;create_auto_cdc_from_snapshot_flow() fits this model, but it requires the source lambda to return (DataFrame, snapshot_version), which feels heavy to implement compared to just producing CDC rows and using create_auto_cdc_flow().&lt;/P&gt;&lt;P&gt;So the question is:&lt;/P&gt;&lt;P&gt;For a system that only provides full daily snapshots (no row-level operations), what are the real technical advantages of using create_auto_cdc_from_snapshot_flow()?&lt;/P&gt;&lt;P&gt;Is snapshot-based AUTO CDC mainly a convenience API, or does it give better correctness, SCD2 handling, or performance guarantees than the create_auto_cdc_flow() approach?&lt;/P&gt;</description>
      <pubDate>Mon, 19 Jan 2026 17:01:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/create-auto-cdc-from-snapshot-flow-vs-create-auto-cdc-flow-when/m-p/144461#M52326</guid>
      <dc:creator>batch_bender</dc:creator>
      <dc:date>2026-01-19T17:01:04Z</dc:date>
    </item>
    <item>
      <title>Re: create_auto_cdc_from_snapshot_flow vs create_auto_cdc_flow – when is snapshot CDC actually worth</title>
      <link>https://community.databricks.com/t5/data-engineering/create-auto-cdc-from-snapshot-flow-vs-create-auto-cdc-flow-when/m-p/144682#M52371</link>
      <description>&lt;P&gt;If your source only emits full daily snapshots, create_auto_cdc_from_snapshot_flow() is purpose-built for this and will likely be simpler and safer to operate than synthesizing CDC events for create_auto_cdc_flow(). It automatically computes inserts/updates/deletes between snapshots, supports SCD1/2, enforces strict snapshot ordering, and spares you from modeling delete/truncate semantics and event sequencing yourself.&lt;/P&gt;</description>
      <pubDate>Wed, 21 Jan 2026 03:31:24 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/create-auto-cdc-from-snapshot-flow-vs-create-auto-cdc-flow-when/m-p/144682#M52371</guid>
      <dc:creator>pradeep_singh</dc:creator>
      <dc:date>2026-01-21T03:31:24Z</dc:date>
    </item>
    <item>
      <title>Re: create_auto_cdc_from_snapshot_flow vs create_auto_cdc_flow – when is snapshot CDC actually worth</title>
      <link>https://community.databricks.com/t5/data-engineering/create-auto-cdc-from-snapshot-flow-vs-create-auto-cdc-flow-when/m-p/145399#M52493</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/209827"&gt;@batch_bender&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P data-path-to-node="3"&gt;For your case, I recommend using &lt;CODE data-path-to-node="3" data-index-in-node="18"&gt;create_auto_cdc_from_snapshot_flow()&lt;/CODE&gt;. Since your system provides full snapshots without row-level operation data, this is the only way to accurately generate &lt;STRONG data-path-to-node="3" data-index-in-node="176"&gt;SCD tables&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P data-path-to-node="4"&gt;&lt;STRONG data-path-to-node="4" data-index-in-node="0"&gt;How it works:&lt;/STRONG&gt; It compares the new snapshot to the target to identify changes:&lt;/P&gt;
&lt;UL data-path-to-node="5"&gt;
&lt;LI&gt;
&lt;P data-path-to-node="5,0,0"&gt;New keys → &lt;CODE data-path-to-node="5,0,0" data-index-in-node="11"&gt;INSERT&lt;/CODE&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P data-path-to-node="5,1,0"&gt;Existing keys with different values → &lt;CODE data-path-to-node="5,1,0" data-index-in-node="17"&gt;UPDATE&lt;/CODE&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P data-path-to-node="5,2,0"&gt;Keys missing from the snapshot but present in target → &lt;CODE data-path-to-node="5,2,0" data-index-in-node="15"&gt;DELETE&lt;/CODE&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
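&lt;P&gt;As a rough mental model, the per-key diff can be pictured with a small runnable sketch (plain Python, not the actual engine; the key and column values are invented for illustration):&lt;/P&gt;
&lt;PRE&gt;# Toy illustration of snapshot diffing keyed by id.
# This mimics what the engine infers between two snapshots; it is NOT the real implementation.
prev = {1: {"name": "a"}, 2: {"name": "b"}}   # yesterday's snapshot
curr = {2: {"name": "b2"}, 3: {"name": "c"}}  # today's snapshot

inserts = [k for k in curr if k not in prev]
updates = [k for k in curr if k in prev and curr[k] != prev[k]]
deletes = [k for k in prev if k not in curr]

print(inserts, updates, deletes)  # [3] [2] [1]&lt;/PRE&gt;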
&lt;P data-path-to-node="7"&gt;&lt;STRONG data-path-to-node="7" data-index-in-node="0"&gt;Implementation Details:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P data-path-to-node="7"&gt;The lambda function is necessary only if there are multiple historical snapshots in the landing zone to be processed.&amp;nbsp;&lt;/P&gt;
&lt;UL data-path-to-node="8"&gt;
&lt;LI&gt;
&lt;P data-path-to-node="8,0,0"&gt;&lt;STRONG data-path-to-node="8,0,0" data-index-in-node="0"&gt;Processing History:&lt;/STRONG&gt; If you have multiple historical snapshots in your landing zone, you'll need a &lt;A class="ng-star-inserted" href="https://docs.databricks.com/aws/en/ldp/cdc#example-historical-snapshot-processing" target="_blank" rel="noopener"&gt;lambda function&lt;/A&gt; to tell the flow how to order them.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P data-path-to-node="8,1,0"&gt;&lt;STRONG data-path-to-node="8,1,0" data-index-in-node="0"&gt;Periodic Snapshots:&lt;/STRONG&gt; If the source simply overwrites the old snapshot with a new one each day, you can just &lt;A class="ng-star-inserted" href="https://docs.databricks.com/aws/en/ldp/cdc#example-periodic-snapshot-processing" target="_blank" rel="noopener"&gt;pass the path or table name&lt;/A&gt; directly.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-path-to-node="6"&gt;&lt;STRONG data-path-to-node="6" data-index-in-node="0"&gt;Performance Note:&lt;/STRONG&gt; Because&lt;CODE data-path-to-node="3" data-index-in-node="18"&gt;create_auto_cdc_from_snapshot_flow()&lt;/CODE&gt; requires a full scan of every snapshot, it can be heavy on large datasets. If the source system eventually gains the ability to provide row-level logs (CDC), it's better to switch to &lt;CODE data-path-to-node="6" data-index-in-node="203"&gt;create_auto_cdc_flow()&lt;/CODE&gt; for better performance.&lt;/P&gt;
&lt;P data-path-to-node="7"&gt;Hope this helps!&lt;/P&gt;
</description>
      <pubDate>Tue, 27 Jan 2026 14:20:48 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/create-auto-cdc-from-snapshot-flow-vs-create-auto-cdc-flow-when/m-p/145399#M52493</guid>
      <dc:creator>aleksandra_ch</dc:creator>
      <dc:date>2026-01-27T14:20:48Z</dc:date>
    </item>
    <item>
      <title>Hi @batch_bender, Given your scenario (daily full snapsho...</title>
      <link>https://community.databricks.com/t5/data-engineering/create-auto-cdc-from-snapshot-flow-vs-create-auto-cdc-flow-when/m-p/150282#M53332</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/209827"&gt;@batch_bender&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;Given your scenario (daily full snapshots, no operation column, ordering by snapshot_date, unique key ID), create_auto_cdc_from_snapshot_flow() is the right tool for the job, and it is more than just a convenience wrapper. Here is a breakdown of the real technical differences.&lt;/P&gt;
&lt;P&gt;WHAT SNAPSHOT CDC DOES UNDER THE HOOD&lt;/P&gt;
&lt;P&gt;When you use create_auto_cdc_from_snapshot_flow(), the Lakeflow Spark Declarative Pipelines (SDP) engine automatically compares consecutive snapshots and infers inserts, updates, and deletes for you:&lt;/P&gt;
&lt;P&gt;- A row present in the new snapshot but absent from the previous snapshot = INSERT&lt;BR /&gt;
- A row present in both snapshots but with changed non-key columns = UPDATE&lt;BR /&gt;
- A row present in the previous snapshot but absent from the new snapshot = DELETE&lt;/P&gt;
&lt;P&gt;This diffing logic is built into the SDP runtime and is optimized for this exact pattern. You do not need to write any comparison logic yourself.&lt;/P&gt;
&lt;P&gt;WHY NOT JUST BUILD CDC ROWS MANUALLY AND USE create_auto_cdc_flow()?&lt;/P&gt;
&lt;P&gt;You could, but there are several practical and correctness reasons to prefer the snapshot API for your use case:&lt;/P&gt;
&lt;P&gt;1. CORRECTNESS GUARANTEES&lt;BR /&gt;
The snapshot API handles all the edge cases in diff detection atomically within the pipeline transaction. If you build CDC rows yourself (for example, by joining today's snapshot against yesterday's to detect changes), you are responsible for getting every edge case right: null handling in comparisons, ensuring no rows are missed or double-counted, and handling partial failures. The built-in snapshot diffing avoids these pitfalls.&lt;/P&gt;
&lt;P&gt;2. SCD TYPE 2 TRACKING&lt;BR /&gt;
If you ever need SCD type 2 history, the snapshot API populates __START_AT and __END_AT columns automatically using the snapshot version as the sequence marker. With create_auto_cdc_flow(), you would need to supply your own sequence_by column and explicitly tag each row with an operation type (INSERT, UPDATE, DELETE), which you said your source does not provide.&lt;/P&gt;
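&lt;P&gt;To make the shape of that output concrete, here is a toy sketch (plain Python, not the engine; the ids and values are invented) of how __START_AT and __END_AT evolve as snapshots arrive, using the snapshot version as the sequence marker:&lt;/P&gt;
&lt;PRE&gt;# Toy SCD2 history keyed on "id", sequenced by snapshot version.
# Illustrates the __START_AT / __END_AT bookkeeping; NOT the real engine.
history = []  # rows: {"id", "val", "__START_AT", "__END_AT"}

def apply_snapshot(snapshot, version):
    live = {r["id"]: r for r in history if r["__END_AT"] is None}
    for key, val in snapshot.items():
        if key not in live:                    # new key: open a row
            history.append({"id": key, "val": val,
                            "__START_AT": version, "__END_AT": None})
        elif live[key]["val"] != val:          # changed: close old row, open new
            live[key]["__END_AT"] = version
            history.append({"id": key, "val": val,
                            "__START_AT": version, "__END_AT": None})
    for key, row in live.items():
        if key not in snapshot:                # missing from snapshot: delete
            row["__END_AT"] = version

apply_snapshot({1: "a", 2: "b"}, 1)
apply_snapshot({1: "a2"}, 2)  # id 1 updated, id 2 deleted&lt;/PRE&gt;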
&lt;P&gt;3. VERSION MANAGEMENT&lt;BR /&gt;
The snapshot API tracks which snapshot version was last processed and picks up from there on the next pipeline run. For the historical snapshot variant (callable function returning DataFrame + version), this means you get exactly-once processing with automatic bookmarking. With create_auto_cdc_flow(), you would need to manage this state yourself or rely on Structured Streaming checkpoints in your source.&lt;/P&gt;
&lt;P&gt;4. LESS CODE TO MAINTAIN&lt;BR /&gt;
With the snapshot approach, the entire pipeline can look like this:&lt;/P&gt;
&lt;PRE&gt;from pyspark import pipelines as dp

@dp.view(name="daily_snapshot")
def daily_snapshot():
  return spark.read.table("bronze.my_daily_snapshot_table")

dp.create_streaming_table("target_table")

dp.create_auto_cdc_from_snapshot_flow(
  target="target_table",
  source="daily_snapshot",
  keys=["key_id"],
  stored_as_scd_type=1
)&lt;/PRE&gt;
&lt;P&gt;For this "periodic snapshot" pattern where you read from a table or view, the source is just a string name referencing the view. The pipeline ingests the current state of that view on each update and diffs it against the previous snapshot automatically. You do not need the callable function variant (the one returning DataFrame + snapshot_version) unless you are processing historical file-based snapshots from cloud storage.&lt;/P&gt;
&lt;P&gt;WHEN THE CALLABLE FUNCTION VARIANT IS USEFUL&lt;/P&gt;
&lt;P&gt;The source parameter accepting a callable that returns (DataFrame, snapshot_version) is designed for a specific scenario: you have a series of snapshot files in cloud storage (e.g., daily exports from Oracle or MySQL) and you want to replay them in order. In that case, you iterate over the files yourself:&lt;/P&gt;
&lt;PRE&gt;def next_snapshot_and_version(latest_snapshot_version):
  # First run: latest_snapshot_version is None, so start before version 1
  latest_snapshot_version = latest_snapshot_version or 0
  next_version = latest_snapshot_version + 1
  path = f"/mnt/snapshots/daily_{next_version}.parquet"
  if file_exists(path):  # file_exists: a user-supplied helper, e.g. wrapping dbutils.fs.ls
      return (spark.read.parquet(path), next_version)
  # Returning None signals that no newer snapshot is available yet
  return None

dp.create_auto_cdc_from_snapshot_flow(
  target="target_table",
  source=next_snapshot_and_version,
  keys=["key_id"],
  stored_as_scd_type=2
)&lt;/PRE&gt;
&lt;P&gt;If your daily snapshot is already landing as a table (or you can read it as a view), you do not need this pattern at all.&lt;/P&gt;
&lt;P&gt;WHEN create_auto_cdc_flow() IS THE BETTER CHOICE&lt;/P&gt;
&lt;P&gt;Use create_auto_cdc_flow() when your source already provides row-level change events with operation metadata (INSERT, UPDATE, DELETE flags) and a sequence column, for example, from Debezium, a database CDC connector, or Delta Change Data Feed. In that case, the data already tells you what changed, and snapshot diffing would be unnecessary overhead.&lt;/P&gt;
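&lt;P&gt;For contrast, a minimal sketch of that event-driven variant. This assumes a hypothetical cdc_events view exposing an op flag and an event_ts sequence column; verify the parameter names against the CDC docs for your runtime:&lt;/P&gt;
&lt;PRE&gt;from pyspark import pipelines as dp
from pyspark.sql.functions import expr

# Sketch only: "cdc_events", "op", "event_ts", and "key_id" are assumed names,
# standing in for whatever your CDC connector actually emits.
dp.create_streaming_table("target_table")

dp.create_auto_cdc_flow(
    target="target_table",
    source="cdc_events",
    keys=["key_id"],
    sequence_by="event_ts",
    apply_as_deletes=expr("op = 'DELETE'"),
    stored_as_scd_type=1,
)&lt;/PRE&gt;
&lt;P&gt;Here the operation metadata does the work the snapshot differ would otherwise have to infer, which is why this variant is cheaper when row-level events are available.&lt;/P&gt;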
&lt;P&gt;SUMMARY&lt;/P&gt;
&lt;P&gt;For your daily full-snapshot source with no operation column, create_auto_cdc_from_snapshot_flow() is the intended and recommended approach. It gives you built-in diffing, automatic version tracking, and SCD type 1 or 2 support with minimal code. The "periodic snapshot" variant (source as a view/table name string) keeps the implementation simple. Reserve the callable function variant for file-based historical replay scenarios.&lt;/P&gt;
&lt;P&gt;Documentation reference:&lt;BR /&gt;
&lt;A href="https://docs.databricks.com/en/dlt/cdc.html" target="_blank"&gt;https://docs.databricks.com/en/dlt/cdc.html&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Note: SDP pipeline editions Pro or Advanced (or serverless) are required for the CDC APIs.&lt;/P&gt;
&lt;P&gt;* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.&lt;/P&gt;
&lt;P&gt;If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.&lt;/P&gt;</description>
      <pubDate>Mon, 09 Mar 2026 01:01:34 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/create-auto-cdc-from-snapshot-flow-vs-create-auto-cdc-flow-when/m-p/150282#M53332</guid>
      <dc:creator>SteveOstrowski</dc:creator>
      <dc:date>2026-03-09T01:01:34Z</dc:date>
    </item>
    <item>
      <title>Re: create_auto_cdc_from_snapshot_flow vs create_auto_cdc_flow – when is snapshot CDC actually worth</title>
      <link>https://community.databricks.com/t5/data-engineering/create-auto-cdc-from-snapshot-flow-vs-create-auto-cdc-flow-when/m-p/156466#M54428</link>
      <description>&lt;P&gt;Does this work only for tables with a PK? What if the source table doesn't even have a PK? Does it use some type of hashing, concatenating all columns and then using that key for the merge?&lt;/P&gt;</description>
      <pubDate>Fri, 08 May 2026 18:19:15 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/create-auto-cdc-from-snapshot-flow-vs-create-auto-cdc-flow-when/m-p/156466#M54428</guid>
      <dc:creator>manish_de</dc:creator>
      <dc:date>2026-05-08T18:19:15Z</dc:date>
    </item>
  </channel>
</rss>

