Hi @tliuzillow ,
1. Stream-static Join: Each minibatch from the streaming table (A) is joined with the entire Delta table (B).
2. Stream-stream Join: Each minibatch from the streaming table (A) is joined with the minibatches arriving from the other streaming table (B).
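To make the stream-static case concrete, here is a small plain-Python sketch (illustrative names, not the Spark API): every microbatch is joined against the current, full contents of the static side.

```python
# Conceptual sketch of a stream-static join (not Spark internals).
# The "static" side stands in for the full Delta table (B); each
# microbatch from the stream (A) is joined against all of it.

static_table = {101: "sneakers", 102: "backpack"}  # e.g. a Delta dimension table

def join_microbatch(batch, static):
    # Keep only events whose key exists on the static side,
    # pairing each event with the matching static row.
    return [(evt, static[evt["product_id"]])
            for evt in batch if evt["product_id"] in static]

batch1 = [{"product_id": 101, "qty": 2}, {"product_id": 999, "qty": 1}]
join_microbatch(batch1, static_table)  # only product 101 finds a match
```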
However, as per the documentation, "the challenge of generating join results between two data streams is that, at any point of time, the view of the dataset is incomplete for both sides of the join making it much harder to find matches between inputs."
This is why Spark buffers past input as streaming state, which allows it to match incoming data with past records and thus ensures complete join results.
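Conceptually, it works like this (a plain-Python sketch with illustrative names, not Spark internals): each side keeps a buffer of past rows so a late-arriving row on the other side can still find its match, and the watermark bounds how long rows stay buffered.

```python
from datetime import datetime, timedelta

def evict_expired(buffer, watermark_ts):
    """Drop buffered rows whose event time has fallen behind the watermark."""
    return [row for row in buffer if row["event_time"] >= watermark_ts]

def join_incoming(incoming, other_buffer, key):
    """Match one incoming row against the other side's buffered history."""
    return [(incoming, row) for row in other_buffer if row[key] == incoming[key]]

# Example: a click arrives 30 minutes after its impression was buffered
t0 = datetime(2024, 1, 1, 12, 0)
impression_buffer = [{"adId": 7, "event_time": t0}]
click = {"adId": 7, "event_time": t0 + timedelta(minutes=30)}

matches = join_incoming(click, impression_buffer, "adId")
# the pair still matches even though the impression arrived earlier;
# once the watermark passes t0, evict_expired() would drop that row
```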
To keep this buffered state from growing indefinitely, you can use watermarking. Here is the code sample from the above documentation:
from pyspark.sql.functions import expr
impressions = spark.readStream. ...
clicks = spark.readStream. ...
# Apply watermarks on event-time columns
impressionsWithWatermark = impressions.withWatermark("impressionTime", "2 hours")
clicksWithWatermark = clicks.withWatermark("clickTime", "3 hours")
# Join with event-time constraints
impressionsWithWatermark.join(
    clicksWithWatermark,
    expr("""
        clickAdId = impressionAdId AND
        clickTime >= impressionTime AND
        clickTime <= impressionTime + interval 1 hour
    """)
)
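The event-time constraint in that join accepts a click only if it happens within 1 hour after its impression. The predicate itself can be checked in plain Python (illustrative only, not the Spark API):

```python
from datetime import datetime, timedelta

def within_constraint(impression_time, click_time):
    # Mirrors the SQL predicate: clickTime >= impressionTime AND
    # clickTime <= impressionTime + interval 1 hour
    return impression_time <= click_time <= impression_time + timedelta(hours=1)

imp = datetime(2024, 1, 1, 12, 0)
within_constraint(imp, imp + timedelta(minutes=45))  # True: inside the window
within_constraint(imp, imp + timedelta(hours=2))     # False: too late to match
```

Such a time-range condition, together with the watermarks, is what lets Spark decide when buffered impressions can never match any future click and safely drop them from state.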