UPDATE or DELETE with pipeline that needs to reprocess in DLT

dowdark
New Contributor

I'm currently trying to replicate an existing pipeline that uses a standard RDBMS. I have no experience with Databricks at all.

I have about 4-5 tables (much like dimensions) with different event types, and I want my pipeline to produce a streaming table as its final output, to make processing easier in the next pipeline.

My problem is that one of the tables defines the campaign of the event, and each of the other tables holds events relative to that campaign. Unfortunately, the way the data is uploaded to the cloud isn't synchronized, so we can receive events whose campaign hasn't been defined yet.

The way I designed the pipeline is to segregate all the still-undefined data into a separate table; on each update, I join the segregated data again, union it with the non-segregated data, and try to apply_changes into the target table.

The problem: when this new defining data arrives, it is considered an update rather than an insert on the source, resulting in an error.

Is there any way to write this new data as an insert rather than an update on the source? I don't want the pipeline to reprocess everything, since the data volume is quite considerable.

4 REPLIES 4

Kaniz
Community Manager

Hi @dowdark, you are trying to handle late-arriving data in your pipeline with Databricks.
- Delta Lake can be used for late-arriving data and for updating previously processed data.
- Delta Lake provides ACID transactions for complex data pipelines.
- Limitations of Delta Live Tables:
  - DML queries that modify the schema of a streaming table are not supported.
  - The LOCATION property is not supported when defining a table.
  - Delta Live Tables-enabled pipelines cannot publish to Delta Sharing.
- To handle updates as well as inserts, use Delta Lake's MERGE INTO command.
- MERGE INTO handles updates and inserts (upserts) in a single operation.
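To make the upsert semantics of MERGE concrete without a Databricks cluster, here is a runnable analogue using Python's stdlib sqlite3 and SQLite's `INSERT ... ON CONFLICT DO UPDATE` (the table and column names are made up; on Databricks you would express the same thing with MERGE INTO):

```python
import sqlite3

# An in-memory table standing in for the target of the pipeline.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_id INTEGER PRIMARY KEY, campaign TEXT)")
conn.execute("INSERT INTO events VALUES (1, 'unknown')")

# Incoming batch: a late-arriving definition for event 1 (an update)
# plus a brand-new event 2 (an insert) -- handled in one statement.
rows = [(1, "spring_sale"), (2, "spring_sale")]
conn.executemany(
    """INSERT INTO events (event_id, campaign) VALUES (?, ?)
       ON CONFLICT(event_id) DO UPDATE SET campaign = excluded.campaign""",
    rows,
)

print(conn.execute("SELECT * FROM events ORDER BY event_id").fetchall())
# [(1, 'spring_sale'), (2, 'spring_sale')]
```

The point is that a single upsert statement decides per row whether it matched (update) or not (insert), so late-arriving rows do not need a separate insert-only path.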

Kaniz
Community Manager

Hi @dowdark, thank you for posting your question in our community! We are happy to assist you.

To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?

This will also help other community members who may have similar questions in the future. Thank you for your participation and let us know if you need any further assistance!

Manisha_Jena
New Contributor III

Hi @dowdark Just a friendly follow-up. Have you had the opportunity to go through my colleague's response to your inquiry? Was it beneficial, or do you still require further assistance? Your response would be highly valued.

Manisha_Jena
New Contributor III

Hi @dowdark
What is the error you get when the pipeline tries to update the rows instead of performing an insert? That should give us more information about the problem.

Please raise an SF case with us with this error and its complete stack trace.
