- 8758 Views
- 6 replies
- 3 kudos
Hello Databricks community, I'm working on a pipeline and would like to implement a common use case using Delta Live Tables. The pipeline should include the following steps: incrementally load data from Table A as a batch. If the pipeline has previously...
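A minimal sketch of that first step, assuming the DLT Python API and a hypothetical source Delta table named table_a (the table and function names are placeholders):

```python
import dlt

@dlt.table(comment="Incrementally ingest new rows from Table A")
def table_a_incremental():
    # Reading a Delta table as a stream means each triggered pipeline
    # update processes only rows added since the previous run; with a
    # triggered (not continuous) pipeline this behaves like an
    # incremental batch load.
    return spark.readStream.table("table_a")
```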
Latest Reply
I totally agree that this is a gap in the Databricks solution. This gap exists between a static read and real-time streaming. My problem (and I suspect there are many similar use cases) is that I have slowly changing data coming into structured folders via ...
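A minimal sketch of one common workaround for the folder scenario, assuming the files can be picked up with Auto Loader (the path and file format below are placeholders):

```python
import dlt

@dlt.table(comment="Incrementally discover new files in structured folders")
def raw_folder_data():
    # Auto Loader (cloudFiles) tracks which files it has already seen,
    # so each pipeline update ingests only files that arrived since the
    # last run.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/structured/")
    )
```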
5 More Replies
by Zara • New Contributor II
- 2140 Views
- 2 replies
- 3 kudos
I want to load incremental data into a Delta Live Table. I wrote a function to load data for 10 tables, but every time I run the pipeline, some tables are empty (they only have a schema), and when I run it again, other tables are empty and the previous tabl...
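A minimal sketch of the usual pattern for defining several DLT tables in one loop. The source table names are placeholders; the factory function binds each source per table, avoiding Python's late-binding closures, which can otherwise leave every generated table reading from the loop variable's final value:

```python
import dlt

# Hypothetical list of source tables; the real pipeline loads 10 tables.
sources = [f"source_{i}" for i in range(10)]

def make_table(src):
    # src is captured as a function argument, so each generated table
    # keeps its own source name.
    @dlt.table(name=f"{src}_incremental")
    def load():
        return spark.readStream.table(src)
    return load

for src in sources:
    make_table(src)
```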
Latest Reply
@zahra Jalilpour How DLT tables and views are updated depends on the update type. Refresh all: all live tables are updated to reflect the current state of their input data sources, and for all streaming tables, new rows are appended to the table. Full...
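For reference, either update type can also be requested programmatically. A minimal sketch against the pipelines REST API (POST /api/2.0/pipelines/{pipeline_id}/updates); the host, token, and pipeline ID are placeholders:

```python
import requests

HOST = "https://<workspace-host>"      # placeholder workspace URL
PIPELINE_ID = "<pipeline-id>"          # placeholder pipeline ID
TOKEN = "<personal-access-token>"      # placeholder token

resp = requests.post(
    f"{HOST}/api/2.0/pipelines/{PIPELINE_ID}/updates",
    headers={"Authorization": f"Bearer {TOKEN}"},
    # full_refresh=True clears and recomputes all tables; omit it (or
    # set False) for a normal "refresh all" update that only appends
    # new rows to streaming tables.
    json={"full_refresh": True},
)
resp.raise_for_status()
print(resp.json())
```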
1 More Replies
- 1567 Views
- 0 replies
- 0 kudos
I am trying to run an incremental data processing job using a Python wheel. The job is scheduled to run, e.g., every hour. For my code to know which data increment to process, I inject the {{start_time}} as part of the command line, like so: ["end_dat...
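A minimal sketch of a wheel entry point consuming such parameters. The flag names ("--start_time", "--end_time") are assumptions, since the excerpt cuts off before the full parameter list:

```python
import argparse

def main():
    # Parameter names are meant to mirror the job's "parameters" list;
    # the exact flags are hypothetical.
    parser = argparse.ArgumentParser()
    parser.add_argument("--start_time", required=True)
    parser.add_argument("--end_time", required=True)
    args = parser.parse_args()
    # Only the window [start_time, end_time) is processed on this run.
    print(f"Processing increment from {args.start_time} to {args.end_time}")

if __name__ == "__main__":
    main()
```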