Explicit transaction blocks

thushar
Contributor

I know Delta tables support the ACID properties, and my understanding is that MERGE, INSERT, DELETE, etc. each run inside a transaction by default, and that if any error occurs during one of these operations, the transaction will be rolled back. I hope this understanding is correct; please confirm.

Another question: in SQL Server we can specify explicit transaction blocks containing multiple statements, like:

Begin Tran

    Insert Into Table1 ...

    Update Table2 ...

    Delete From Table3 ...

    Merge Table4 ...

End Tran

Is it possible to use these kinds of explicit transaction blocks with Databricks and Delta tables?

1 ACCEPTED SOLUTION

Accepted Solutions

Lakshay
Esteemed Contributor

Hi @Thushar R​, your understanding of ACID transactions in Delta tables is correct: if any error occurs during a transaction, the transaction is rolled back.

Regarding the second question: Databricks does not support transaction blocks similar to the ones in SQL Server.

You can check this document regarding ACID transactions in Delta: https://docs.databricks.com/lakehouse/acid.html
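To illustrate the statement-level atomicity described above: each successful write to a Delta table becomes one commit in the transaction log, which you can inspect and even restore to manually. A minimal Spark SQL sketch, assuming a hypothetical table `demo.events` with columns `id` and `status`:

```sql
-- Each statement below is its own atomic transaction; if one fails
-- midway, no partial changes become visible to readers.
INSERT INTO demo.events VALUES (1, 'created');
UPDATE demo.events SET status = 'done' WHERE id = 1;

-- Every committed statement appears as one version in the log:
DESCRIBE HISTORY demo.events;

-- A manual "rollback" to an earlier version is possible via RESTORE:
RESTORE TABLE demo.events TO VERSION AS OF 0;
```

Note that RESTORE is a deliberate, after-the-fact recovery step, not an equivalent of wrapping several statements in one transaction.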


3 REPLIES


pvignesh92
Honored Contributor

@Thushar R​ Yes, you are right. Because a Delta table keeps a transaction log and maintains a version history of your data, it can easily roll back a transaction in case of a failure: the transaction log is updated with the information about new data files only once a transaction is successfully committed. Since the Delta engine reads the transaction log to identify the relevant files, rather than listing the underlying storage, it considers only committed files and returns consistent results.

About multiple statements in a single transaction: right now Delta Lake supports only MERGE, which by itself supports DELETE and UPSERT based on conditions that can reference both the source and the target table. If your source and target are in Snowflake and you write via the Spark connector, you can achieve multi-statement transactions there. But if your source and target are both Delta tables, only MERGE is supported as of now, I believe.
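To make the MERGE point concrete: a single MERGE statement can combine update, delete, and insert logic, and it commits atomically as one Delta transaction. A minimal Spark SQL sketch, with hypothetical table and column names:

```sql
MERGE INTO target t
USING source s
ON t.id = s.id
WHEN MATCHED AND s.op = 'delete' THEN
  DELETE
WHEN MATCHED THEN
  UPDATE SET t.value = s.value
WHEN NOT MATCHED THEN
  INSERT (id, value) VALUES (s.id, s.value);
```

Either all matched clauses take effect in one commit, or, if the statement fails, none of them do.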

Kaniz
Community Manager

Hi @Thushar R​​, if these answers resolved your question, please return and mark the best answer to help build our shared knowledge base.
