Hello, I have a Databricks question. A DataFrame job that writes to an S3 bucket usually takes about 8 minutes to finish, but now it takes 8 to 9 hours to complete. Does anybody have any clues about this behavior?
The DataFrame is only about 300 to 400 records.
It is a simple query against a Delta table:
val results = spark
  .table("table")
  .filter(...)                    // first filter (redacted)
  .filter(by_date)                // filter on the date
  .drop("some_columns")           // drop a few unneeded columns
  .select(a_struct_field)
  .withColumn("image", image)
listofString.foreach { mystring =>
  println(s"start writing .json to S3 for ${mystring}")
  results
    .filter($"struct.field.result" === mystring)   // keep only the rows for this key
    .coalesce(1)                                   // single output file per key
    .write
    .mode(SaveMode.Overwrite)
    .json(s"${filePath}/temp_${mystring}")
  println(s"complete writing .json to S3 for ${mystring}")
}
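One thing I am wondering about: since results is evaluated lazily, each write in the loop presumably re-reads the Delta table. Below is a minimal sketch of the same loop with the DataFrame cached and materialized up front (assuming listofString, filePath, and the struct column names are exactly as above); is this the right way to avoid the repeated scans, or is the slowdown likely to come from somewhere else?

import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.col

// Cache the small (300-400 row) DataFrame once so that each write below
// does not trigger another scan of the Delta table.
results.cache()
results.count()   // force materialization before the loop

listofString.foreach { mystring =>
  results
    .filter(col("struct.field.result") === mystring)
    .coalesce(1)
    .write
    .mode(SaveMode.Overwrite)
    .json(s"${filePath}/temp_${mystring}")
}

results.unpersist()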
Thanks in advance