<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic DLT pipeline - silver table, joining streaming data in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/dlt-pipeline-silver-table-joining-streaming-data/m-p/78120#M35470</link>
    <description>&lt;P&gt;Hello!&lt;/P&gt;&lt;P&gt;I'm trying to do my modeling in DLT pipelines. For bronze, I created 3 streaming views. When I try to join them to create the silver table, I get an error saying I can't join two streams without watermarks. I tried adding watermarks, but then I got no data. Does anyone know how to set watermarks so that all the necessary data is kept, or is it possible to do the join without watermarks?&lt;/P&gt;</description>
    <pubDate>Wed, 10 Jul 2024 14:14:57 GMT</pubDate>
    <dc:creator>ksenija</dc:creator>
    <dc:date>2024-07-10T14:14:57Z</dc:date>
    <item>
      <title>DLT pipeline - silver table, joining streaming data</title>
      <link>https://community.databricks.com/t5/data-engineering/dlt-pipeline-silver-table-joining-streaming-data/m-p/78120#M35470</link>
      <description>&lt;P&gt;Hello!&lt;/P&gt;&lt;P&gt;I'm trying to do my modeling in DLT pipelines. For bronze, I created 3 streaming views. When I try to join them to create the silver table, I get an error saying I can't join two streams without watermarks. I tried adding watermarks, but then I got no data. Does anyone know how to set watermarks so that all the necessary data is kept, or is it possible to do the join without watermarks?&lt;/P&gt;</description>
      <pubDate>Wed, 10 Jul 2024 14:14:57 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/dlt-pipeline-silver-table-joining-streaming-data/m-p/78120#M35470</guid>
      <dc:creator>ksenija</dc:creator>
      <dc:date>2024-07-10T14:14:57Z</dc:date>
    </item>
    <item>
      <title>Re: DLT pipeline - silver table, joining streaming data</title>
      <link>https://community.databricks.com/t5/data-engineering/dlt-pipeline-silver-table-joining-streaming-data/m-p/78169#M35483</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/96755"&gt;@ksenija&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;Greetings!&lt;/P&gt;
&lt;P&gt;Streaming uses watermarks to control the threshold for how long to continue processing updates for a given state entity. Common examples of state entities include:&lt;/P&gt;
&lt;UL class="simple"&gt;
&lt;LI&gt;
&lt;P&gt;Aggregations over a time window.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Unique keys in a join between two streams.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;When you declare a watermark, you specify a timestamp field and a watermark threshold on a streaming DataFrame. As new data arrives, the state manager tracks the most recent timestamp in the specified field and processes all records within the lateness threshold.&lt;/P&gt;
&lt;P&gt;The following example applies a 10-minute watermark threshold to a windowed count:&lt;/P&gt;
&lt;DIV class="highlight-python notranslate"&gt;
&lt;DIV class="highlight"&gt;
&lt;DIV id="tinyMceEditor_59c922f5181697Ravivarma_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="highlight-header"&gt;&lt;SPAN class="highlight-header__lang"&gt;%Python&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;PRE&gt;from pyspark.sql.functions import window

(df
  .withWatermark("event_time", "10 minutes")
  .groupBy(
    window("event_time", "5 minutes"),
    "id")
  .count()
)
&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;In this example:&lt;/P&gt;
&lt;UL class="simple"&gt;
&lt;LI&gt;
&lt;P&gt;The&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="docutils literal notranslate"&gt;&lt;SPAN class="pre"&gt;event_time&lt;/SPAN&gt;&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;column is used to define a 10-minute watermark and a 5-minute tumbling window.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;A count is collected for each&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="docutils literal notranslate"&gt;&lt;SPAN class="pre"&gt;id&lt;/SPAN&gt;&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;observed within each non-overlapping 5-minute window.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;State information is maintained for each count until the end of the window is more than 10 minutes older than the latest observed&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="docutils literal notranslate"&gt;&lt;SPAN class="pre"&gt;event_time&lt;/SPAN&gt;&lt;/CODE&gt;.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
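&lt;P&gt;The bookkeeping above can be sketched in plain Python. This is a conceptual simulation of the watermark rule, not Spark API code; the window sizes mirror the example, while the record ids and timestamps are invented for illustration:&lt;/P&gt;

```python
# Conceptual sketch of the watermark rule described above, in plain Python.
# The engine tracks the maximum event time seen so far; a window's state is
# finalized once the watermark (max event time minus the threshold) passes
# the end of the window, and records for finalized windows are dropped.

WATERMARK_MINUTES = 10
WINDOW_MINUTES = 5

def window_start(event_minute):
    """Assign an event to its non-overlapping 5-minute window."""
    return (event_minute // WINDOW_MINUTES) * WINDOW_MINUTES

def run(events):
    """events: (event_minute, id) pairs in arrival order.
    Returns counts per (window_start, id) and the late records dropped."""
    max_event_time = 0
    counts, dropped = {}, []
    for minute, rec_id in events:
        max_event_time = max(max_event_time, minute)
        watermark = max_event_time - WATERMARK_MINUTES
        win = window_start(minute)
        if watermark >= win + WINDOW_MINUTES:
            dropped.append((minute, rec_id))  # window already finalized
            continue
        counts[(win, rec_id)] = counts.get((win, rec_id), 0) + 1
    return counts, dropped

counts, dropped = run([(1, "a"), (3, "a"), (7, "b"), (30, "a"), (2, "a")])
# The record at minute 2 arrives after the watermark has advanced to
# 30 - 10 = 20, and its window ended at minute 5, so it is dropped.
```

&lt;P&gt;This also shows why a threshold that is shorter than the actual lateness of your data leads to records silently disappearing: they are treated as late and dropped rather than processed.&lt;/P&gt;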
&lt;P&gt;You can read more about watermarks here:&amp;nbsp;&lt;A href="https://docs.databricks.com/en/structured-streaming/watermarks.html" target="_blank"&gt;https://docs.databricks.com/en/structured-streaming/watermarks.html&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.databricks.com/blog/feature-deep-dive-watermarking-apache-spark-structured-streaming" target="_blank"&gt;https://www.databricks.com/blog/feature-deep-dive-watermarking-apache-spark-structured-streaming&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Regards,&lt;/P&gt;
&lt;P&gt;Ravi&lt;/P&gt;</description>
      <pubDate>Wed, 10 Jul 2024 19:11:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/dlt-pipeline-silver-table-joining-streaming-data/m-p/78169#M35483</guid>
      <dc:creator>Ravivarma</dc:creator>
      <dc:date>2024-07-10T19:11:55Z</dc:date>
    </item>
    <item>
      <title>Re: DLT pipeline - silver table, joining streaming data</title>
      <link>https://community.databricks.com/t5/data-engineering/dlt-pipeline-silver-table-joining-streaming-data/m-p/78276#M35496</link>
      <description>&lt;P&gt;Hi Ravi,&lt;/P&gt;&lt;P&gt;Thanks! What would you suggest for a daily import of data in a DLT pipeline: streaming tables with a 1-day watermark, or a materialized view?&lt;/P&gt;</description>
      <pubDate>Thu, 11 Jul 2024 08:42:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/dlt-pipeline-silver-table-joining-streaming-data/m-p/78276#M35496</guid>
      <dc:creator>ksenija</dc:creator>
      <dc:date>2024-07-11T08:42:55Z</dc:date>
    </item>
    <item>
      <title>Re: DLT pipeline - silver table, joining streaming data</title>
      <link>https://community.databricks.com/t5/data-engineering/dlt-pipeline-silver-table-joining-streaming-data/m-p/78783#M35599</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/96755"&gt;@ksenija&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;Greetings of the day!&lt;/P&gt;
&lt;P&gt;Both streaming tables with a 1-day watermark and materialized views have their own advantages for the above use case!&lt;/P&gt;
&lt;P&gt;Using streaming tables with a 1-day watermark can be helpful for capturing changes in real-time if your data is continuously updated. However, please note that data loss can occur if some records arrive later than the watermark, as they might be considered late and dropped. To prevent this, you can enable the "withEventTimeOrder" option when processing the initial snapshot, ensuring no data is dropped during this phase.&lt;/P&gt;
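&lt;P&gt;The data-loss behaviour can be illustrated with a small plain-Python simulation of a watermarked stream-stream join. This is conceptual only, not Spark or DLT API code; the sides, keys, and days are invented for illustration:&lt;/P&gt;

```python
# Conceptual sketch of why a watermarked stream-stream join can lose rows:
# each side buffers state keyed by the join key, the watermark trails the
# maximum event time seen by the threshold, and rows that arrive behind
# the watermark are discarded before they get a chance to match.

THRESHOLD_DAYS = 1  # the "1-day watermark" discussed above

def join_streams(arrivals):
    """arrivals: (side, event_day, key) triples in arrival order,
    with side being "L" or "R". Returns matched keys and dropped rows."""
    max_seen = 0
    state = {"L": {}, "R": {}}
    matches, dropped = [], []
    for side, day, key in arrivals:
        max_seen = max(max_seen, day)
        if max_seen - THRESHOLD_DAYS > day:
            dropped.append((side, day, key))  # behind the watermark
            continue
        state[side][key] = day
        other = "R" if side == "L" else "L"
        if key in state[other]:
            matches.append(key)
    return matches, dropped

matches, dropped = join_streams([
    ("L", 1, "k1"), ("R", 1, "k1"),  # on-time pair: joins fine
    ("L", 5, "k2"),                  # advances the watermark to day 4
    ("R", 2, "k3"),                  # 3 days behind the watermark: dropped
])
```

&lt;P&gt;The practical takeaway: pick a watermark threshold at least as large as the worst lateness you expect, or, for a once-a-day batch-like load, lean on a materialized view so completeness is not tied to a watermark at all.&lt;/P&gt;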
&lt;P&gt;On the other hand, materialized views are helpful for pre-computing and storing query results for fast access. They are particularly useful for complex and resource-intensive queries. However, please note that they need to be refreshed periodically to keep up with changes in the base tables.&lt;/P&gt;</description>
      <pubDate>Mon, 15 Jul 2024 11:20:46 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/dlt-pipeline-silver-table-joining-streaming-data/m-p/78783#M35599</guid>
      <dc:creator>Ravivarma</dc:creator>
      <dc:date>2024-07-15T11:20:46Z</dc:date>
    </item>
  </channel>
</rss>

