<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Triggering Downstream Workflow in Databricks from New Inserts in Snowflake in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/triggering-downstream-workflow-in-databricks-from-new-inserts-in/m-p/114573#M44872</link>
    <description>&lt;P&gt;Hey &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/102548"&gt;@Brahmareddy&lt;/a&gt;, I ended up creating a Delta table as a mirror of the source Snowflake table (accessed via Lakehouse Federation). I set up logic to append only new records to the Delta table based on a timestamp column—so only records where the timestamp is greater than the current max get added.&lt;/P&gt;&lt;P&gt;Then I use readStream in append mode to write those new records to a staging Delta table. The downstream process picks up from this staging table—so for example, it processes new items like 3, 4, 5—and then I delete the processed records from the staging table to ensure only new data gets handled incrementally.&lt;/P&gt;&lt;P&gt;What do you think of this approach? Am I overcomplicating it?&lt;/P&gt;</description>
    <pubDate>Fri, 04 Apr 2025 22:35:31 GMT</pubDate>
    <dc:creator>abelian-grape</dc:creator>
    <dc:date>2025-04-04T22:35:31Z</dc:date>
    <item>
      <title>Triggering Downstream Workflow in Databricks from New Inserts in Snowflake</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-downstream-workflow-in-databricks-from-new-inserts-in/m-p/113974#M44687</link>
      <description>&lt;P class=""&gt;Hi Databricks experts,&lt;/P&gt;&lt;P class=""&gt;I have a table in Snowflake that tracks newly added items, and a downstream data processing workflow that needs to be triggered whenever new items are added. I'm currently using Lakehouse Federation to query the Snowflake tables in Databricks.&lt;/P&gt;&lt;P class=""&gt;How can I set up a mechanism to trigger the downstream data processing step with the newly added items? For example, if table X in Snowflake receives a new insert with item_id = 84848, the workflow should be triggered to run analysis based on this item_id. The trigger can be either interval-based or event-driven.&lt;/P&gt;&lt;P class=""&gt;What would be the best approach to implement this in Databricks?&lt;/P&gt;</description>
      <pubDate>Sat, 29 Mar 2025 17:27:53 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-downstream-workflow-in-databricks-from-new-inserts-in/m-p/113974#M44687</guid>
      <dc:creator>abelian-grape</dc:creator>
      <dc:date>2025-03-29T17:27:53Z</dc:date>
    </item>
    <item>
      <title>Re: Triggering Downstream Workflow in Databricks from New Inserts in Snowflake</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-downstream-workflow-in-databricks-from-new-inserts-in/m-p/113987#M44696</link>
      <description>&lt;P&gt;Hi&amp;nbsp;abelian-grape,&lt;/P&gt;&lt;P&gt;Great question! Since you're using Lakehouse Federation to access the Snowflake table, and Databricks can't directly stream from or listen to inserts in Snowflake, the best approach is to use an interval-based polling mechanism in Databricks. You can set up a scheduled Databricks Job (or a simple notebook) that runs every few minutes, queries the Snowflake table via Lakehouse Federation, and checks for any new item_ids based on a timestamp or an incrementing ID column. If new items are found, you can trigger your downstream workflow—for example, by chaining tasks in a Databricks Workflow or using a REST API call to another job. To avoid reprocessing the same items, store the last processed timestamp or item_id in a Delta table or a control table. While it’s not true event-driven processing, this pattern is reliable and works well with external sources like Snowflake. Let me know if you want help setting up the polling logic or job scheduling!&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Brahma&lt;/P&gt;</description>
      <pubDate>Sun, 30 Mar 2025 02:27:37 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-downstream-workflow-in-databricks-from-new-inserts-in/m-p/113987#M44696</guid>
      <dc:creator>Brahmareddy</dc:creator>
      <dc:date>2025-03-30T02:27:37Z</dc:date>
    </item>
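    <!--
      A minimal sketch of the polling pattern described in the reply above, meant to run as a
      scheduled Databricks job where `spark` and `dbutils` are available. The table names
      (snowflake_cat.sales.items, main.ops.item_checkpoint), the inserted_at column, and
      DOWNSTREAM_JOB_ID are illustrative assumptions, not details from the thread.

      import requests
      from pyspark.sql import functions as F

      SOURCE = "snowflake_cat.sales.items"      # federated Snowflake table (assumed name)
      CHECKPOINT = "main.ops.item_checkpoint"   # Delta control table (assumed to exist)
      DOWNSTREAM_JOB_ID = 123                   # assumed Databricks job id

      # Last processed timestamp; agg(max) returns None when the control table is empty.
      last_ts = spark.table(CHECKPOINT).agg(F.max("last_ts").alias("ts")).first()["ts"]

      new_items = spark.table(SOURCE)
      if last_ts is not None:
          new_items = new_items.filter(F.col("inserted_at") > F.lit(last_ts))
      rows = new_items.select("item_id", "inserted_at").collect()

      if rows:
          # One common way to get the workspace URL and a token from inside a notebook.
          ctx = dbutils.notebook.entry_point.getDbutils().notebook().getContext()
          host = ctx.apiUrl().getOrElse(None)
          token = ctx.apiToken().getOrElse(None)

          # Trigger the downstream job once per polling cycle, passing the new item ids.
          requests.post(
              f"{host}/api/2.1/jobs/run-now",
              headers={"Authorization": f"Bearer {token}"},
              json={"job_id": DOWNSTREAM_JOB_ID,
                    "notebook_params": {"item_ids": ",".join(str(r["item_id"]) for r in rows)}},
          ).raise_for_status()

          # Advance the checkpoint so the same items are not reprocessed on the next run.
          max_ts = max(r["inserted_at"] for r in rows)
          spark.createDataFrame([(max_ts,)], "last_ts timestamp") \
               .write.mode("overwrite").saveAsTable(CHECKPOINT)
    -->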
    <item>
      <title>Re: Triggering Downstream Workflow in Databricks from New Inserts in Snowflake</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-downstream-workflow-in-databricks-from-new-inserts-in/m-p/114573#M44872</link>
      <description>&lt;P&gt;Hey &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/102548"&gt;@Brahmareddy&lt;/a&gt;, I ended up creating a Delta table as a mirror of the source Snowflake table (accessed via Lakehouse Federation). I set up logic to append only new records to the Delta table based on a timestamp column—so only records where the timestamp is greater than the current max get added.&lt;/P&gt;&lt;P&gt;Then I use readStream in append mode to write those new records to a staging Delta table. The downstream process picks up from this staging table—so for example, it processes new items like 3, 4, 5—and then I delete the processed records from the staging table to ensure only new data gets handled incrementally.&lt;/P&gt;&lt;P&gt;What do you think of this approach? Am I overcomplicating it?&lt;/P&gt;</description>
      <pubDate>Fri, 04 Apr 2025 22:35:31 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-downstream-workflow-in-databricks-from-new-inserts-in/m-p/114573#M44872</guid>
      <dc:creator>abelian-grape</dc:creator>
      <dc:date>2025-04-04T22:35:31Z</dc:date>
    </item>
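    <!--
      A minimal sketch of the mirror-and-stage pattern described in the reply above, again
      assuming a Databricks notebook where `spark` is available. The table names
      (snowflake_cat.sales.items, main.ops.items_mirror, main.ops.items_staging), the
      inserted_at column, and the checkpoint path are illustrative assumptions.

      from pyspark.sql import functions as F

      SOURCE = "snowflake_cat.sales.items"   # federated Snowflake table (assumed name)
      MIRROR = "main.ops.items_mirror"       # Delta mirror of the source (assumed to exist)
      STAGING = "main.ops.items_staging"     # staging table the downstream job reads (assumed)

      # 1. Append only rows newer than the mirror's current max timestamp.
      max_ts = spark.table(MIRROR).agg(F.max("inserted_at")).first()[0]
      src = spark.table(SOURCE)
      if max_ts is not None:
          src = src.filter(F.col("inserted_at") > F.lit(max_ts))
      src.write.mode("append").saveAsTable(MIRROR)

      # 2. Stream the mirror's new rows into the staging table. trigger(availableNow=True)
      #    processes the backlog and stops, so this can run inside the same scheduled job.
      q = (spark.readStream.table(MIRROR)
                .writeStream
                .option("checkpointLocation", "/Volumes/main/ops/checkpoints/items_staging")  # assumed path
                .outputMode("append")
                .trigger(availableNow=True)
                .toTable(STAGING))
      q.awaitTermination()

      # 3. Cleanup, typically the last task of the downstream workflow once the new
      #    item ids have been processed: clear the staging table for the next cycle.
      spark.sql(f"DELETE FROM {STAGING}")
    -->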
  </channel>
</rss>

