I'm currently trying to replicate an existing pipeline that uses a standard RDBMS. I have no experience in Databricks at all.
I have about 4-5 tables (much like dimensions) with different event types, and I want my pipeline to produce a streaming table as its final output to facilitate processing in the next pipeline.
My problem is that one of the tables defines the campaign of the event, and each of the other tables contains events relative to that campaign. Unfortunately, because of the way the data is uploaded to the cloud, it isn't synchronized, so we can receive events whose campaign isn't defined yet.
The way I designed the pipeline is to segregate all the not-yet-defined data into a table; on each update, I join the segregated data again, union it with the non-segregated data, and then apply_changes into the target table. A rough sketch of this is shown below.
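To make the setup concrete, here is roughly the shape of what I have. It's a minimal sketch, not my actual code: the table and column names (raw.campaigns, raw.events, campaign_id, event_id, event_ts) are hypothetical, and it shows only one of the 4-5 event tables.

```python
import dlt

# Hypothetical names for illustration: raw.campaigns is the defining table,
# raw.events stands in for one of the event tables.

@dlt.table(comment="Events whose campaign is already defined")
def defined_events():
    events = spark.readStream.table("raw.events")
    campaigns = spark.read.table("raw.campaigns")   # static lookup side
    return events.join(campaigns, "campaign_id", "left_semi")

@dlt.table(comment="Events that arrived before their campaign definition")
def quarantined_events():
    events = spark.readStream.table("raw.events")
    campaigns = spark.read.table("raw.campaigns")
    return events.join(campaigns, "campaign_id", "left_anti")

@dlt.view(comment="Retry the quarantined events and union with the defined ones")
def events_to_apply():
    retried = dlt.read("quarantined_events").join(
        spark.read.table("raw.campaigns"), "campaign_id", "left_semi"
    )
    return dlt.read("defined_events").unionByName(retried)

dlt.create_streaming_table("events_final")

# apply_changes streams from its source, so quarantined rows that only resolve
# on a later update show up as changes to already-processed source data,
# which is where the error appears.
dlt.apply_changes(
    target="events_final",
    source="events_to_apply",
    keys=["event_id"],
    sequence_by="event_ts",
)
```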
The problem: after the defining data finally arrives, the re-emitted rows are considered an update rather than an insert on the source, resulting in an error.
Is there any way to write this new data as new rows rather than as an update on the source? I don't want the pipeline to reprocess everything, since the data volume is considerable.