Currently I load multiple parquet files with this code:
df = spark.read.parquet("/mnt/dev/bronze/Voucher/*/*")
(Inside the Voucher folder there is one subfolder per date, each containing a single parquet file.)
How can I add a column to this DataFrame that contains the creation date of each parquet file?
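For context, this is roughly the kind of result I'm after — a minimal sketch that assumes the date folder name in the path (e.g. `/Voucher/<date>/`) can stand in for the creation date, which may or may not be what I end up using:

```python
from pyspark.sql import functions as F

# Read all the daily parquet files under the Voucher folder
df = spark.read.parquet("/mnt/dev/bronze/Voucher/*/*")

# Extract the date folder name from each row's source file path
# (assumes paths look like .../Voucher/2023-05-01/part-0000.parquet)
df = df.withColumn(
    "creation_date",
    F.to_date(F.regexp_extract(F.input_file_name(), r"/Voucher/([^/]+)/", 1)),
)
```

Ideally, though, I'd like the actual creation date of each parquet file rather than relying on the folder name.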
Thanks