<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Concurrent append exception - Two streaming sources writing to same record on the delta table in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/concurrent-append-exception-two-streaming-sources-writing-to/m-p/76210#M35161</link>
    <description>&lt;P&gt;Hi All, I have a scenario where there are two streaming sources, Stream1 (id, col1, col2) and Stream2 (id, col3, col4), and my Delta table has columns (id, col1, col2, col3, col4).&lt;/P&gt;&lt;P&gt;My requirement is to insert the record into the Delta table if the corresponding id record is not present, and to update the corresponding stream's column values if the id record is already present in the Delta table.&lt;/P&gt;&lt;P&gt;I have no control over the sources, so I cannot do stream-stream joins by adding a watermark.&lt;/P&gt;&lt;P&gt;I tried implementing this using two merge statements, one for each streaming source, but I am facing a ConcurrentAppendException when both streams have the same id record at the same time.&lt;/P&gt;&lt;P&gt;For now, I have implemented this using a union of the streaming sources and foreachBatch. In the foreachBatch, I filter the records into two separate DataFrames and merge each into the Delta table. This works because the merge statements execute sequentially instead of in parallel.&lt;/P&gt;&lt;P&gt;Could someone please suggest a better way to implement this in Databricks?&lt;/P&gt;</description>
    <pubDate>Mon, 01 Jul 2024 04:32:02 GMT</pubDate>
    <dc:creator>kmaley</dc:creator>
    <dc:date>2024-07-01T04:32:02Z</dc:date>
    <item>
      <title>Concurrent append exception - Two streaming sources writing to same record on the delta table</title>
      <link>https://community.databricks.com/t5/data-engineering/concurrent-append-exception-two-streaming-sources-writing-to/m-p/76210#M35161</link>
      <description>&lt;P&gt;Hi All, I have a scenario where there are two streaming sources, Stream1 (id, col1, col2) and Stream2 (id, col3, col4), and my Delta table has columns (id, col1, col2, col3, col4).&lt;/P&gt;&lt;P&gt;My requirement is to insert the record into the Delta table if the corresponding id record is not present, and to update the corresponding stream's column values if the id record is already present in the Delta table.&lt;/P&gt;&lt;P&gt;I have no control over the sources, so I cannot do stream-stream joins by adding a watermark.&lt;/P&gt;&lt;P&gt;I tried implementing this using two merge statements, one for each streaming source, but I am facing a ConcurrentAppendException when both streams have the same id record at the same time.&lt;/P&gt;&lt;P&gt;For now, I have implemented this using a union of the streaming sources and foreachBatch. In the foreachBatch, I filter the records into two separate DataFrames and merge each into the Delta table. This works because the merge statements execute sequentially instead of in parallel.&lt;/P&gt;&lt;P&gt;Could someone please suggest a better way to implement this in Databricks?&lt;/P&gt;</description>
      <pubDate>Mon, 01 Jul 2024 04:32:02 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/concurrent-append-exception-two-streaming-sources-writing-to/m-p/76210#M35161</guid>
      <dc:creator>kmaley</dc:creator>
      <dc:date>2024-07-01T04:32:02Z</dc:date>
    </item>
    <item>
      <title>Re: Concurrent append exception - Two streaming sources writing to same record on the delta table</title>
      <link>https://community.databricks.com/t5/data-engineering/concurrent-append-exception-two-streaming-sources-writing-to/m-p/76227#M35166</link>
      <description>&lt;P&gt;I would keep both write operations separate, i.e. each should write to its own table/partition. In later stages (e.g. silver), you can easily merge them.&lt;/P&gt;</description>
      <pubDate>Mon, 01 Jul 2024 08:07:32 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/concurrent-append-exception-two-streaming-sources-writing-to/m-p/76227#M35166</guid>
      <dc:creator>Witold</dc:creator>
      <dc:date>2024-07-01T08:07:32Z</dc:date>
    </item>
  </channel>
</rss>