Hello all,
I am relatively new to data engineering and am working on a project that requires me to programmatically delete data from Delta Live Tables. However, I found that simply stopping the streaming job and deleting rows from the underlying Delta tables causes the stream to fail once I restart it. The only solutions I have found so far are either to point the stream at a new checkpoint location after the deletion, or to delete the corresponding entries from the Parquet files themselves.

Are these the correct solutions to this problem, and which one do people typically use in such cases? Does every deletion mean I will need to create a new checkpoint location, or possibly parse billions of Parquet records and remove their entries?
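For reference, here is a simplified sketch of the workflow I described (the table names, checkpoint path, and delete predicate are just placeholders, not my actual project values):

```python
# Sketch of the workflow: stop the stream, delete rows, restart from the
# same checkpoint. Table names and paths below are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# 1. Stop any active streaming queries before modifying the source table.
for query in spark.streams.active:
    query.stop()

# 2. Programmatically delete rows from the source Delta table.
DeltaTable.forName(spark, "events_bronze").delete("event_date < '2020-01-01'")

# 3. Restart the downstream stream against the SAME checkpoint location.
#    This is the step that fails after the delete has been committed.
(spark.readStream
    .table("events_bronze")
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events_silver")
    .toTable("events_silver"))
```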
Thanks!