Hi @Srajole,
There are a few possible reasons why the data is not being written to the table:
You may be writing to a path different from the table’s storage location, or using a write mode that doesn’t replace data as expected. Check the table’s actual location first:
spark.sql("DESCRIBE DETAIL my_table").select("location").show(truncate=False)
Make sure the .write.format("delta").save(path) or .saveAsTable("my_table") call targets that same location.
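A minimal sketch of both write styles, assuming hypothetical names my_table and df (swap in your own table and DataFrame):

# Hypothetical names (my_table, df) - replace with yours.
location = spark.sql("DESCRIBE DETAIL my_table").select("location").first()["location"]

# Write by table name...
df.write.format("delta").mode("overwrite").saveAsTable("my_table")

# ...or write by path, reusing the location above so both writes hit the same data.
df.write.format("delta").mode("overwrite").save(location)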
If you use append mode, check that the partition values and filters match what your downstream queries expect; a quick post-write check is shown below.
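One way to sanity-check what actually landed after the append, assuming a hypothetical partition column event_date:

# Row counts per partition value (event_date is a placeholder column name).
spark.table("my_table").groupBy("event_date").count().show(truncate=False)

# If the table is partitioned, this lists the partitions Delta currently knows about.
spark.sql("SHOW PARTITIONS my_table").show(truncate=False)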
Your DataFrame may have zero rows at the write stage (e.g., filters remove all rows, or join keys don’t match). Do a simple count on the DataFrame before actually writing it to the table.
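For example (df is a placeholder for the DataFrame you are about to write):

row_count = df.count()
print(f"Rows about to be written: {row_count}")
if row_count == 0:
    print("Nothing to write - check upstream filters and join keys")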
If the target Delta table is partitioned and you’re writing with dynamic partition overwrite or partition filters (e.g. replaceWhere), it’s possible that no partitions in the incoming data match, so nothing appears to be replaced. Make sure the incoming partition values match the partitions you intend to overwrite.
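A rough sketch of both overwrite styles, again assuming the hypothetical my_table, df, and event_date names (dynamic partition overwrite needs a reasonably recent Delta Lake / DBR version):

# Dynamic partition overwrite: only partitions present in df get replaced.
(df.write.format("delta")
   .mode("overwrite")
   .option("partitionOverwriteMode", "dynamic")
   .saveAsTable("my_table"))

# Or replace an explicit slice with replaceWhere; the incoming rows must
# actually satisfy the predicate, otherwise the write will not do what you expect.
(df.write.format("delta")
   .mode("overwrite")
   .option("replaceWhere", "event_date = '2024-01-01'")
   .saveAsTable("my_table"))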
Overwriting a partitioned table without specifying overwriteSchema may drop existing data without writing the new batch if the partition columns don’t match the table’s.
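If the schema or partitioning genuinely changed and you want Delta to accept it, you can opt in explicitly; a sketch with the same hypothetical names:

# Explicitly allow the overwrite to rewrite the table's schema.
(df.write.format("delta")
   .mode("overwrite")
   .option("overwriteSchema", "true")
   .saveAsTable("my_table"))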
Hope this helps!