Hi!
Why are the fields discovery_time, commit_time, and archive_time NULL in cloud_files_state?
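For reference, I'm reading the state with the cloud_files_state table-valued function, roughly like this (the checkpoint path is a placeholder for my real one):

```python
# Inspect Auto Loader's per-file ingestion state for a stream,
# given its checkpoint location (path below is a placeholder)
checkpoint_path = "/mnt/checkpoints/autoloader_stream"
state_query = f"SELECT * FROM cloud_files_state('{checkpoint_path}')"
# On Databricks: display(spark.sql(state_query))
```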
Do I need to configure anything when creating my Auto Loader?
df = spark.readStream.format("cloudFiles") \
    .option("cloudFiles.format", "json") \
    .option("cloudFiles.tenantId", tenantId) \
    .option("cloudFiles.clientId", clientId) \
    .option("cloudFiles.clientSecret", clientSecret) \
    .option("cloudFiles.resourceGroup", resourceGroup) \
    .option("cloudFiles.subscriptionId", subscriptionId) \
    .option("cloudFiles.useNotifications", "true") \
    .option("cloudFiles.includeExistingFiles", "true") \
    .option("cloudFiles.schemaLocation", checkpoint_path) \
    .option("cloudFiles.schemaEvolutionMode", "rescue") \
    .option("recursiveFileLookup", "true") \
    .option("badRecordsPath", bad_records_path) \
    .option("multiLine", "true") \
    .schema(dfSchema.schema) \
    .load(sourceDir)
# ... transformations on df, producing df6 ...
# With foreachBatch the sink is defined inside upsertToDelta (which writes
# to targetDir), so the writer itself needs no .format() or output path
df6.writeStream \
    .foreachBatch(upsertToDelta) \
    .option("checkpointLocation", checkpoint_path) \
    .outputMode("update") \
    .start()
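For completeness, upsertToDelta is a standard micro-batch merge handler, roughly this shape (the target table name and the key column id below are placeholders, not my real schema):

```python
# Hypothetical foreachBatch handler: merges each micro-batch into the
# target Delta table. Table name and key column are placeholders.
def upsertToDelta(batch_df, batch_id):
    # Expose the micro-batch to SQL under a temporary view name
    batch_df.createOrReplaceTempView("updates")
    # Upsert on an assumed key column `id`; adapt to the real schema
    batch_df.sparkSession.sql("""
        MERGE INTO target t
        USING updates s
        ON t.id = s.id
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)
```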