06-01-2022 03:48 AM
Is it possible to have two streaming sources doing MERGE into the same Delta table, with each source setting a different set of fields?
We are trying to create a single table which will be used by the service layer for queries. The table can be populated from multiple source tables each contributing a set of fields to the sink schema.
Sample query for the scenario:
MERGE INTO sink
USING source_a
ON source_a.id = sink.id
WHEN MATCHED THEN
  UPDATE SET
    id = source_a.id,
    summary = source_a.summary,
    description = source_a.description,
    customerId = source_a.customerId
WHEN NOT MATCHED THEN
  INSERT (id, summary, description, customerId)
  VALUES (source_a.id, source_a.summary, source_a.description, source_a.customerId);

MERGE INTO sink
USING source_b
ON source_b.customerId = sink.customerId
WHEN MATCHED THEN
  UPDATE SET
    customerId = source_b.customerId,
    customerName = source_b.customerName,
    customerIndustry = source_b.customerIndustry;
I am new to streaming and was wondering whether there is some way to implement this using streaming queries. Currently we are seeing out-of-order issues when merging the data into the sink. Is there something that can be done to work around the problem?
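One common way to drive a MERGE from a streaming source is to run it inside a foreachBatch function, so each micro-batch is applied as a single Delta transaction before the next one starts. The sketch below is only illustrative, not the poster's actual code: it reuses the sink and source_a names from the sample query, but the function name merge_source_a and the temporary view name source_a_batch are made up.

```python
# Sketch: apply the first sample MERGE per micro-batch via foreachBatch.
# Table names (sink, source_a) come from the sample query above; the
# function and view names are hypothetical.

def merge_source_a(micro_batch_df, batch_id):
    # Expose the micro-batch to Spark SQL under a temporary view name.
    micro_batch_df.createOrReplaceTempView("source_a_batch")
    # Run the MERGE against the sink table using the micro-batch as source.
    micro_batch_df.sparkSession.sql("""
        MERGE INTO sink
        USING source_a_batch
        ON source_a_batch.id = sink.id
        WHEN MATCHED THEN
          UPDATE SET
            id = source_a_batch.id,
            summary = source_a_batch.summary,
            description = source_a_batch.description,
            customerId = source_a_batch.customerId
        WHEN NOT MATCHED THEN
          INSERT (id, summary, description, customerId)
          VALUES (source_a_batch.id, source_a_batch.summary,
                  source_a_batch.description, source_a_batch.customerId)
    """)
```

This would be attached with something like spark.readStream.table("source_a").writeStream.foreachBatch(merge_source_a).option("checkpointLocation", "/checkpoints/source_a").start() (checkpoint path hypothetical); a second function would do the same for source_b. Running the two queries sequentially with a run-once trigger avoids concurrent writes to sink.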
06-01-2022 02:43 PM
Not sure about your partitioning schema, but be careful, as you can run into issues with concurrent updates to the same partition. See Concurrency control | Databricks on AWS. In this case, I usually set the trigger to run once, run each merge update synchronously, and put the job on a schedule.
What kind of "out of order" issues are you seeing?
06-02-2022 03:57 AM
Hi @Zachary Higgins
Thanks for the reply
Currently, we are also using Trigger.Once so that we can handle the merge stream dependencies properly, but we were wondering whether we can scale the pipeline to continuous streaming by changing the trigger interval in the future.
Thanks for the heads-up regarding concurrency. I was assuming pessimistic concurrency control, where Delta would handle concurrent writes to the same partition without exceptions or data corruption.
I don't think we can support fully streaming pipelines if there can be concurrency issues.
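To make the run-once vs. continuous choice switchable later, one possible sketch (all names here are hypothetical, not from this thread) is a small helper that builds the streaming writer with either a run-once trigger or a processing-time interval:

```python
def start_merge_stream(spark, source_table, merge_fn, checkpoint_dir,
                       interval=None):
    """Start a stream that applies merge_fn per micro-batch.

    interval=None keeps run-once behaviour; passing e.g. "1 minute"
    switches to continuous micro-batching. Parameter names are
    illustrative only.
    """
    writer = (spark.readStream.table(source_table)
                   .writeStream
                   .foreachBatch(merge_fn)
                   .option("checkpointLocation", checkpoint_dir))
    if interval is None:
        # Process all available data, then stop (Trigger.AvailableNow,
        # the successor to Trigger.Once in recent Spark versions).
        writer = writer.trigger(availableNow=True)
    else:
        # Continuous micro-batches; note that two such streams merging
        # into the same sink partitions can hit Delta's optimistic
        # concurrency checks (e.g. ConcurrentAppendException).
        writer = writer.trigger(processingTime=interval)
    return writer.start()
```

If each source only ever touches a disjoint set of sink partitions, adding a partition predicate to each MERGE ON clause is the usual way to let both streams run concurrently without conflicting.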
06-09-2022 12:40 AM
Hi @Zachary Higgins , We haven’t heard from you on the last response from @Harikrishnan P H , and I was checking back to see if you have a resolution. If you have any solution, please do share the same with the community as it can be helpful to others. Otherwise, we will respond with more details and try to help.
01-20-2023 10:17 AM
Hi @Kaniz Fatma
I'm looking at a similar use case where we have multiple source tables in Bronze/Silver and need to push data into the same target table. We have adopted Data Vault as our data model for Silver. If you could provide some documentation or a URL, that would be great.