Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

DLT pipeline - silver table, joining streaming data

ksenija
Contributor

Hello!

I'm trying to do my modeling in DLT pipelines. For bronze, I created three streaming views. When I try to join them to create the silver table, I get an error saying that a stream cannot be joined to another stream without watermarks. I tried adding watermarks, but then I got no data. Does anyone know how to add watermarks so that all the necessary data comes through, or is it possible to do this without watermarks?

1 ACCEPTED SOLUTION

Accepted Solutions

Ravivarma
New Contributor III
New Contributor III

Hello @ksenija ,

Greetings of the day!

Both streaming tables with a 1-day watermark and materialized views have their own advantages for the above use case!

Using streaming tables with a 1-day watermark can be helpful for capturing changes in real-time if your data is continuously updated. However, please note that data loss can occur if some records arrive later than the watermark, as they might be considered late and dropped. To prevent this, you can enable the "withEventTimeOrder" option when processing the initial snapshot, ensuring no data is dropped during this phase.

On the other hand, materialized views are helpful for pre-computing and storing query results for fast access. They are particularly useful for complex and resource-intensive queries. However, please note that they need to be refreshed periodically to keep up with changes in the base tables.

View solution in original post

3 REPLIES

Ravivarma
New Contributor III

Hello @ksenija ,

Greetings!

Structured Streaming uses watermarks to control the threshold for how long to continue processing updates for a given state entity. Common examples of state entities include:

  • Aggregations over a time window.

  • Unique keys in a join between two streams.

When you declare a watermark, you specify a timestamp field and a watermark threshold on a streaming DataFrame. As new data arrives, the state manager tracks the most recent timestamp in the specified field and processes all records within the lateness threshold.

The following example applies a 10-minute watermark threshold to a windowed count:

%python
from pyspark.sql.functions import window

(df  # df is a streaming DataFrame with an event_time timestamp column
  .withWatermark("event_time", "10 minutes")  # tolerate records up to 10 minutes late
  .groupBy(
    window("event_time", "5 minutes"),        # 5-minute tumbling windows
    "id")
  .count()
)

In this example:

  • The event_time column is used to define a 10-minute watermark and 5-minute tumbling windows.

  • A count is collected for each id observed in each non-overlapping 5-minute window.

  • State information is maintained for each count until the end of the window is 10 minutes older than the latest observed event_time.

You can read more about watermarks here: https://docs.databricks.com/en/structured-streaming/watermarks.html

https://www.databricks.com/blog/feature-deep-dive-watermarking-apache-spark-structured-streaming

Regards,

Ravi

Hi Ravi,

Thanks! What would you suggest for a daily import of data in a DLT pipeline: streaming tables with a 1-day watermark, or a materialized view?

Ravivarma
New Contributor III

Hello @ksenija ,

Greetings of the day!

Both streaming tables with a 1-day watermark and materialized views have their own advantages for the above use case!

Using streaming tables with a 1-day watermark can be helpful for capturing changes in real-time if your data is continuously updated. However, please note that data loss can occur if some records arrive later than the watermark, as they might be considered late and dropped. To prevent this, you can enable the "withEventTimeOrder" option when processing the initial snapshot, ensuring no data is dropped during this phase.
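A sketch of how that option is set when reading a Delta streaming source (Databricks/Delta-specific fragment, not runnable standalone; the table name bronze_events and column event_time are hypothetical):

```python
# withEventTimeOrder applies to the initial snapshot only: it processes the
# snapshot in event-time order so that watermarking does not drop records
# that merely appear "late" because of file ordering in the snapshot.
df = (
    spark.readStream.format("delta")
    .option("withEventTimeOrder", "true")
    .table("bronze_events")
    .withWatermark("event_time", "1 day")
)
```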

On the other hand, materialized views are helpful for pre-computing and storing query results for fast access. They are particularly useful for complex and resource-intensive queries. However, please note that they need to be refreshed periodically to keep up with changes in the base tables.
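In DLT's Python API, the two options look roughly like this. This is a sketch only: the dlt module is importable only inside a running pipeline, and the table names (bronze_a, bronze_b) and join key (id) are hypothetical:

```python
import dlt

# Option 1: a streaming silver table. The watermarks bound the join state,
# but records arriving more than 1 day late may be dropped.
@dlt.table
def silver_joined_stream():
    a = dlt.read_stream("bronze_a").withWatermark("event_time", "1 day")
    b = dlt.read_stream("bronze_b").withWatermark("event_time", "1 day")
    return a.join(b, ["id"])

# Option 2: a materialized view. Each pipeline update recomputes the join
# over the full base tables, so no late data is lost, at the cost of
# recomputation on every refresh.
@dlt.table
def silver_joined_mv():
    return dlt.read("bronze_a").join(dlt.read("bronze_b"), ["id"])
```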
