<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic ACID properties in delta? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/acid-properties-in-delta/m-p/48522#M28316</link>
    <description>&lt;P&gt;How are locks maintained within a Delta Lake? For instance, lets say there are 2 simple tables, customer_details and say orders. Lets say I am running a job that will say insert an order in the orders table for say $100 for a specific customerId, it should go and update (increment) the customer_details table with the order_count value by 1 and also update the order_value details by 100. Note that until the the orders table is fully updated with all the information, the customer_details table should not be updated and also, once the orders table is inserted/deleted, the customer_details table HAS to be updated with the right counts and dollars. In a traditional DB, we have this concept of savepoints where we can combine multiple CRUD operations as a 'transaction' and either fail (rollback?) everything or commit everything to the DB. How is this possible in a delta environment? While ACID capabilities exist at an individual table level, how can this be achieved in a delta lake ? (Kindly note that updating the customer_details table after the fact as a batch job is a solution but this is just a simple use case I have posted. There is a good chance that an "order" can also require data to be stored in multiple tables). Thanks in advance..&lt;/P&gt;</description>
    <pubDate>Thu, 05 Oct 2023 23:12:17 GMT</pubDate>
    <dc:creator>sriradh</dc:creator>
    <dc:date>2023-10-05T23:12:17Z</dc:date>
    <item>
      <title>ACID properties in delta?</title>
      <link>https://community.databricks.com/t5/data-engineering/acid-properties-in-delta/m-p/48522#M28316</link>
      <description>&lt;P&gt;How are locks maintained within a Delta Lake? For instance, lets say there are 2 simple tables, customer_details and say orders. Lets say I am running a job that will say insert an order in the orders table for say $100 for a specific customerId, it should go and update (increment) the customer_details table with the order_count value by 1 and also update the order_value details by 100. Note that until the the orders table is fully updated with all the information, the customer_details table should not be updated and also, once the orders table is inserted/deleted, the customer_details table HAS to be updated with the right counts and dollars. In a traditional DB, we have this concept of savepoints where we can combine multiple CRUD operations as a 'transaction' and either fail (rollback?) everything or commit everything to the DB. How is this possible in a delta environment? While ACID capabilities exist at an individual table level, how can this be achieved in a delta lake ? (Kindly note that updating the customer_details table after the fact as a batch job is a solution but this is just a simple use case I have posted. There is a good chance that an "order" can also require data to be stored in multiple tables). Thanks in advance..&lt;/P&gt;</description>
      <pubDate>Thu, 05 Oct 2023 23:12:17 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/acid-properties-in-delta/m-p/48522#M28316</guid>
      <dc:creator>sriradh</dc:creator>
      <dc:date>2023-10-05T23:12:17Z</dc:date>
    </item>
  </channel>
</rss>