Hi @prasad95, thank you for sharing your concern here.
In addition to @Retired_mod's comments, you can follow the steps below to capture Change Data Capture (CDC) data from DynamoDB Streams and write it into a Delta table in Databricks:
1. Connect to DynamoDB Streams and read the CDC data using the AWS SDK.
2. Process the CDC data in Databricks using the APPLY CHANGES API in Delta Live Tables, which is designed to correctly process CDC records.
3. Apply the changes to a target streaming table with the APPLY CHANGES INTO statement. Its general syntax is:
CREATE OR REFRESH STREAMING TABLE table_name;
APPLY CHANGES INTO LIVE.table_name FROM source KEYS (keys)
[IGNORE NULL UPDATES]
[APPLY AS DELETE WHEN condition]
[APPLY AS TRUNCATE WHEN condition]
SEQUENCE BY orderByColumn
[COLUMNS {columnList | * EXCEPT (exceptColumnList)}]
[STORED AS {SCD TYPE 1 | SCD TYPE 2}]
[TRACK HISTORY ON {columnList | * EXCEPT (exceptColumnList)}]
In this statement, source is the CDC data read from DynamoDB Streams, and table_name is the Delta table where you want to write the CDC data.
4. After executing this statement, the CDC data from DynamoDB Streams is written into the Delta table in Databricks. Remember to define unique keys for each row in the source data, and if you want to track history on certain columns, use the TRACK HISTORY ON clause. A filled-in sketch is shown below.
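As a minimal DLT SQL sketch, the pipeline could look like the following. This assumes the DynamoDB Streams records have already been landed as JSON files in cloud storage (for example via a Lambda function or Kinesis Data Firehose delivery); the path and the names dynamodb_cdc_raw, customers, customer_id, operation, and updated_at are hypothetical placeholders you would replace with your own:

-- Landing table: ingest the raw CDC records with Auto Loader
-- (the S3 path and JSON format are assumptions; adjust to your landed data)
CREATE OR REFRESH STREAMING TABLE dynamodb_cdc_raw
AS SELECT *
FROM cloud_files("s3://my-bucket/dynamodb-cdc/", "json");

-- Target Delta table maintained by APPLY CHANGES
CREATE OR REFRESH STREAMING TABLE customers;

-- Apply inserts, updates, and deletes keyed on customer_id,
-- ordered by the updated_at value carried in each stream record
APPLY CHANGES INTO LIVE.customers
FROM STREAM(LIVE.dynamodb_cdc_raw)
KEYS (customer_id)
APPLY AS DELETE WHEN operation = 'REMOVE'
SEQUENCE BY updated_at
COLUMNS * EXCEPT (operation)
STORED AS SCD TYPE 1;

Here STORED AS SCD TYPE 1 keeps only the latest version of each row; switch to SCD TYPE 2 (optionally with TRACK HISTORY ON) if you need to retain the change history.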
You can go through the links below to understand more about this:
https://docs.databricks.com/en/delta-live-tables/cdc.html#how-is-cdc-implemented-with-delta-live-tab...
DLT SQL reference: https://docs.databricks.com/en/delta-live-tables/sql-ref.html
Please leave a like if it is helpful. Follow-ups are appreciated.
Kudos,
Sai Kumar