<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Table listener in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/table-listener/m-p/148448#M52892</link>
    <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/192995"&gt;@maikel&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;I think a table update trigger would be a perfect solution for your scenario. Check the docs below:&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.databricks.com/aws/en/jobs/trigger-table-update" target="_blank"&gt;https://docs.databricks.com/aws/en/jobs/trigger-table-update&lt;/A&gt;&lt;/P&gt;</description>
    <pubDate>Sun, 15 Feb 2026 17:06:25 GMT</pubDate>
    <dc:creator>szymon_dybczak</dc:creator>
    <dc:date>2026-02-15T17:06:25Z</dc:date>
    <item>
      <title>Table listener</title>
      <link>https://community.databricks.com/t5/data-engineering/table-listener/m-p/148360#M52882</link>
      <description>&lt;P&gt;Hello Community,&lt;/P&gt;&lt;P&gt;I would like to ask whether it’s possible to define a job that checks for updates in a table at a specified frequency.&lt;/P&gt;&lt;P&gt;Here is my use case:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;Data is uploaded to a table located in &lt;STRONG&gt;Catalog A, Schema B, Table C&lt;/STRONG&gt; (a.b.c).&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;I need to transfer this data to another table, x.y.z (within the same workspace).&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Before saving the final result, several transformation steps must be applied.&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;I assume that creating a scheduled job would be one approach. However, is it possible to configure the job to check, for example, every 10 minutes whether new data has appeared in a.b.c, and only run if it has? Or would you recommend a different solution?&lt;/P&gt;</description>
      <pubDate>Fri, 13 Feb 2026 18:23:21 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/table-listener/m-p/148360#M52882</guid>
      <dc:creator>maikel</dc:creator>
      <dc:date>2026-02-13T18:23:21Z</dc:date>
    </item>
    <item>
      <title>Re: Table listener</title>
      <link>https://community.databricks.com/t5/data-engineering/table-listener/m-p/148448#M52892</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/192995"&gt;@maikel&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;I think a table update trigger would be a perfect solution for your scenario. Check the docs below:&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.databricks.com/aws/en/jobs/trigger-table-update" target="_blank"&gt;https://docs.databricks.com/aws/en/jobs/trigger-table-update&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 15 Feb 2026 17:06:25 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/table-listener/m-p/148448#M52892</guid>
      <dc:creator>szymon_dybczak</dc:creator>
      <dc:date>2026-02-15T17:06:25Z</dc:date>
    </item>
    <item>
      <title>Re: Table listener</title>
      <link>https://community.databricks.com/t5/data-engineering/table-listener/m-p/148452#M52893</link>
      <description>&lt;P&gt;Thank you&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/110502"&gt;@szymon_dybczak&lt;/a&gt;&amp;nbsp;! This sounds very good! I have already tested it and it does exactly what I wanted to achieve!&lt;BR /&gt;As a second option I found pipelines and DLT. What do you think about that? Or is it too much for my use case?&lt;/P&gt;</description>
      <pubDate>Sun, 15 Feb 2026 19:44:23 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/table-listener/m-p/148452#M52893</guid>
      <dc:creator>maikel</dc:creator>
      <dc:date>2026-02-15T19:44:23Z</dc:date>
    </item>
    <item>
      <title>Re: Table listener</title>
      <link>https://community.databricks.com/t5/data-engineering/table-listener/m-p/148453#M52894</link>
      <description>&lt;P&gt;I also have a question about failure handling for triggered jobs. Let's say new data has arrived in the source table and the job failed for some reason. If I rerun it, or the next batch of data arrives in the source table, will the data from the failed run still be considered? Or do we just lose it?&lt;/P&gt;</description>
      <pubDate>Sun, 15 Feb 2026 19:49:42 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/table-listener/m-p/148453#M52894</guid>
      <dc:creator>maikel</dc:creator>
      <dc:date>2026-02-15T19:49:42Z</dc:date>
    </item>
    <item>
      <title>Re: Table listener</title>
      <link>https://community.databricks.com/t5/data-engineering/table-listener/m-p/148473#M52900</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/192995"&gt;@maikel&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;It's just a trigger mechanism, so it depends entirely on how you implement the pipeline responsible for consuming the data.&lt;BR /&gt;For instance, if you use a Structured Streaming based approach (e.g. Auto Loader), then even if your pipeline fails, you can be sure that when you re-run it you won't miss any data.&lt;BR /&gt;If you want to implement an incremental approach yourself, you need to find a proper key for a given table and an attribute that will help you discover only the records that appeared or changed since the last load.&lt;/P&gt;</description>
      <pubDate>Mon, 16 Feb 2026 07:56:19 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/table-listener/m-p/148473#M52900</guid>
      <dc:creator>szymon_dybczak</dc:creator>
      <dc:date>2026-02-16T07:56:19Z</dc:date>
    </item>
  </channel>
</rss>