I am accessing an on-premises SQL Server table. The table is relatively small (about 10,000 rows), and I read it with:
spark.read.jdbc(url=jdbcUrl, table=query)
Every day there are new records in the on-prem table that I would like to append to my bronze table in the lakehouse. However, there is no "InsertedOn" column or anything similar, and there are no obvious keys in the data that I could use to MERGE into my bronze table. So currently I am overwriting all the data every day, which does not seem like a good approach.
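To illustrate the kind of keyless matching I would need: since there is no natural key, one option would be to hash each full row as a synthetic key and only append rows whose hash is not already present in bronze. A minimal plain-Python sketch of that idea (the sample rows and column names are hypothetical, not my actual schema):

```python
import hashlib

def row_hash(row: dict) -> str:
    """Build a synthetic key by hashing the full row contents (sorted for determinism)."""
    payload = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical data standing in for the SQL Server source and the bronze table.
source_rows = [
    {"name": "alice", "amount": 10},
    {"name": "bob", "amount": 20},
    {"name": "carol", "amount": 30},  # new record since yesterday
]
bronze_rows = [
    {"name": "alice", "amount": 10},
    {"name": "bob", "amount": 20},
]

# Append only the rows whose full-row hash is not already in bronze.
existing = {row_hash(r) for r in bronze_rows}
new_rows = [r for r in source_rows if row_hash(r) not in existing]
print(new_rows)  # only the row not already in bronze
```

In Spark the same idea could presumably be expressed with a hash over all columns plus a left anti join, but I am not sure if this is the recommended pattern.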
Is there a better way to incrementally load the data from SQL Server? Perhaps something using Structured Streaming?
Thank you!