Multiple streaming sources to the same Delta table
06-01-2022 03:48 AM
Is it possible to have two streaming sources each performing a MERGE into the same Delta table, with each source setting a different set of fields?
We are trying to create a single table that will be used by the service layer for queries. The table can be populated from multiple source tables, each contributing a set of fields to the sink schema.
Sample queries for the scenario:
MERGE INTO sink
USING source_a
ON source_a.id = sink.id
WHEN MATCHED THEN
  UPDATE SET
    id = source_a.id,
    summary = source_a.summary,
    description = source_a.description,
    customerId = source_a.customerId
WHEN NOT MATCHED THEN
  INSERT (id, summary, description, customerId)
  VALUES (source_a.id, source_a.summary, source_a.description, source_a.customerId);
MERGE INTO sink
USING source_b
ON source_b.customerId = sink.customerId
WHEN MATCHED THEN
  UPDATE SET
    customerId = source_b.customerId,
    customerName = source_b.customerName,
    customerIndustry = source_b.customerIndustry;
I am new to streaming and was wondering whether there is some way to implement this using streaming queries. We are currently seeing out-of-order issues when merging the data into the sink. Is there something that can be done to work around the problem?
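For reference, the streaming shape we have in mind looks roughly like this. This is a minimal PySpark sketch using foreachBatch, assuming a Databricks notebook where spark is the active session; the checkpoint path is illustrative, and source_b would get an analogous function and stream for the customer fields:

from delta.tables import DeltaTable

def merge_source_a(batch_df, batch_id):
    # Upsert one micro-batch of source_a into the sink, touching only
    # the fields that source_a contributes to the sink schema.
    sink = DeltaTable.forName(spark, "sink")
    (sink.alias("sink")
         .merge(batch_df.alias("source_a"), "source_a.id = sink.id")
         .whenMatchedUpdate(set={
             "id": "source_a.id",
             "summary": "source_a.summary",
             "description": "source_a.description",
             "customerId": "source_a.customerId",
         })
         .whenNotMatchedInsert(values={
             "id": "source_a.id",
             "summary": "source_a.summary",
             "description": "source_a.description",
             "customerId": "source_a.customerId",
         })
         .execute())

# Each source table gets its own stream and checkpoint (path is illustrative).
(spark.readStream.table("source_a")
      .writeStream
      .foreachBatch(merge_source_a)
      .option("checkpointLocation", "/checkpoints/source_a_to_sink")
      .start())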
- Labels: Delta, Delta table, Stream Processing
06-01-2022 02:43 PM
Not sure about your partitioning scheme, but be careful: you can run into issues with concurrent updates to the same partition (see Concurrency control | Databricks on AWS). In this case, I usually set the trigger to run once, run each merge update synchronously, and put the job on a schedule.
What kind of "out of order" issues are you seeing?
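Roughly what I mean, as a sketch, assuming PySpark and per-source merge functions like the foreachBatch example above (merge_source_b would mirror merge_source_a for the customer fields):

# Run each merge stream synchronously with a run-once trigger, and put the
# job on a schedule. awaitTermination() blocks until the stream finishes,
# so the two MERGEs never write to the sink at the same time.
for source, merge_fn in [("source_a", merge_source_a), ("source_b", merge_source_b)]:
    (spark.readStream.table(source)
          .writeStream
          .foreachBatch(merge_fn)
          .trigger(once=True)      # process the available data, then stop
          .option("checkpointLocation", f"/checkpoints/{source}_to_sink")
          .start()
          .awaitTermination())     # block before starting the next merge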
06-02-2022 03:57 AM
Hi @Zachary Higgins
Thanks for the reply
Currently, we are also using Trigger.Once so that we can handle the merge stream dependencies properly, but we were wondering whether we can scale the pipeline to continuous streaming by changing the trigger interval in the future.
Thanks for the heads-up regarding concurrency. I was assuming pessimistic concurrency control, where Delta would handle concurrent writes to the same partition without exceptions or data corruption.
I don't think we can support fully streaming pipelines if there can be concurrency issues.
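From the docs you linked, it looks like the suggested workaround is to scope each MERGE to disjoint partitions with an explicit partition predicate in the ON clause. A sketch, assuming a hypothetical region partition column on the sink:

# Workaround from the Delta concurrency-control docs: give each concurrent
# MERGE an explicit partition predicate so the writers touch disjoint
# partitions. `region` is a hypothetical partition column, not in our schema.
spark.sql("""
  MERGE INTO sink
  USING source_a
  ON source_a.id = sink.id AND sink.region = 'emea'  -- partition predicate
  WHEN MATCHED THEN UPDATE SET summary = source_a.summary
  WHEN NOT MATCHED THEN
    INSERT (id, summary, region) VALUES (source_a.id, source_a.summary, 'emea')
""")

In our case the two sources update overlapping rows rather than disjoint partitions, though, so I'm not sure this applies directly.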
01-20-2023 10:17 AM
Hi @Kaniz Fatma
I'm looking at a similar use case where we have multiple source tables in Bronze/Silver and need to push data into the same target table. We have adopted Data Vault as our data model for Silver. If you could provide some documentation or a URL, that would be great.