Hey @Brahmareddy, I ended up creating a Delta table as a mirror of the source Snowflake table (accessed via Lakehouse Federation). I set up logic to append only new records to the Delta table based on a timestamp column, so only records whose timestamp is greater than the current max get added.
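Just to make sure I'm describing the first step clearly, here's the high-water-mark logic modeled in plain Python (illustration only; in the real pipeline `mirror` and `source` are the Delta mirror and the federated Snowflake table, and the `ts` field stands in for my timestamp column):

```python
# Plain-Python model of the high-water-mark append (hypothetical names;
# the real version runs as PySpark against Delta / federated tables).

def append_new_records(mirror, source):
    """Append only source rows whose timestamp is past the mirror's current max."""
    high_water = max((row["ts"] for row in mirror), default=None)
    new_rows = [row for row in source
                if high_water is None or row["ts"] > high_water]
    mirror.extend(new_rows)
    return new_rows

source = [{"id": 1, "ts": 10}, {"id": 2, "ts": 20}, {"id": 3, "ts": 30}]
mirror = [{"id": 1, "ts": 10}, {"id": 2, "ts": 20}]  # ids 1 and 2 already loaded

added = append_new_records(mirror, source)  # only id 3 crosses the high-water mark
```

Re-running it against an unchanged source appends nothing, which is the idempotence I'm relying on.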
Then I use readStream in append mode to write those new records to a staging Delta table. The downstream process picks up from this staging table (so it processes, say, new items 3, 4, and 5), and then I delete the processed records from the staging table to ensure only new data gets handled incrementally.
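And the staging step, again as a plain-Python sketch (in the real pipeline `staging` is the Delta staging table fed by readStream, and `handler` stands in for the downstream process):

```python
# Plain-Python model of the consume-then-delete staging pattern
# (hypothetical names; the real version deletes rows from a Delta table).

def process_staging(staging, handler):
    """Hand every staged record to the downstream handler, then clear
    staging so the next run only sees genuinely new records."""
    processed = list(staging)
    for record in processed:
        handler(record)
    staging.clear()  # delete the processed rows from the staging table
    return processed

seen = []
staging = [3, 4, 5]                    # e.g. new items 3, 4, 5 land in staging
process_staging(staging, seen.append)  # downstream handles them
# staging is now empty, so later arrivals (6, 7, ...) are processed on their own
```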
What do you think of this approach? Am I overcomplicating it?