I'm trying to implement a streaming pipeline that runs hourly using Spark Structured Streaming, Scala and Delta tables. The pipeline processes items together with their details.
The sources are Delta tables that already exist and are written hourly with "writeStream". The output should be another Delta table: the pipeline reads from the source table, applies some transformations, and writes the result to the destination table.
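For context, the single-source pipeline currently looks roughly like this (table paths, the checkpoint location and the transformation are placeholders, and Trigger.AvailableNow assumes Spark 3.3+):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().appName("hourly-pipeline").getOrCreate()

// Read the existing Delta table as a stream (path is a placeholder)
val source = spark.readStream
  .format("delta")
  .load("/mnt/tables/source_items")

// Stand-in for the real transformations
val transformed = source
  .withColumn("processed_at", current_timestamp())

// Hourly, batch-style run that appends to the destination Delta table
transformed.writeStream
  .format("delta")
  .outputMode("append")
  .option("checkpointLocation", "/mnt/checkpoints/destination_items")
  .trigger(Trigger.AvailableNow())
  .start("/mnt/tables/destination_items")
```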
The problem I'm facing is that, at different points in time, the source table delivers new versions of items that were already processed (these are not duplicate messages, just updated versions of the same items). In those cases I need to update the item in the destination table so that only the latest version is kept.
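For the single-source case, my understanding is that this upsert could be done with "foreachBatch" and a Delta MERGE, roughly like the sketch below ("transformed" is the streaming DataFrame from above, "item_id" is a hypothetical key column, and this ignores deduplication of multiple versions of the same item within one micro-batch):

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.DataFrame

// Upsert one micro-batch into the destination, keeping only the
// latest version of each item (item_id is a hypothetical key).
def upsertBatch(batch: DataFrame, batchId: Long): Unit = {
  DeltaTable.forPath(batch.sparkSession, "/mnt/tables/destination_items").as("dst")
    .merge(batch.as("src"), "dst.item_id = src.item_id")
    .whenMatched().updateAll()
    .whenNotMatched().insertAll()
    .execute()
}

transformed.writeStream
  .foreachBatch { (batch: DataFrame, batchId: Long) => upsertBatch(batch, batchId) }
  .option("checkpointLocation", "/mnt/checkpoints/destination_items")
  .start()
```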
Additionally, in some cases I need to use two streaming tables as the source and join them, which blocks me from using "foreachBatch".
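Roughly, the two-source case looks like this (paths and the join key are placeholders):

```scala
// Read both streaming Delta sources (paths are placeholders)
val items = spark.readStream.format("delta").load("/mnt/tables/source_items")
val details = spark.readStream.format("delta").load("/mnt/tables/source_item_details")

// Inner stream-stream join on a hypothetical key; without watermarks
// the join state grows unbounded, so in practice watermarks would be
// needed for state cleanup.
val joined = items.join(details, Seq("item_id"))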
From what I understand, Structured Streaming can only write in "append" mode here, but for my use case I need to update existing data when writing.
Is there a way to make this work?
I feel this should be a fairly common scenario that many streaming implementations have to face at some point, but so far I haven't been able to find a workaround or any published solutions.