@Kaniz Fatma
I am also facing the same issue while using the `saveAsTable` function of `DataFrameWriter`. Here is the code snippet:
```scala
import org.apache.spark.sql.functions.{col, dayofmonth, month, to_date, year}
import org.apache.spark.sql.types.DataTypes

val df = someDataframeHere // placeholder for the actual DataFrame
val glueTableName = "database-name-here.table-name-here"
val s3Path = "s3a://some/path/here/"
val partitionKeys = Array("some-partition-key-here")

// Derive year/month/day partition columns from the createdAt column.
val dataframeWithYearMonthDay = df
  .withColumn("year", year(to_date(col("createdAt"))).cast(DataTypes.FloatType))
  .withColumn("month", month(to_date(col("createdAt"))).cast(DataTypes.FloatType))
  .withColumn("day", dayofmonth(to_date(col("createdAt"))).cast(DataTypes.FloatType))

// Write partitioned parquet to S3 and register the table in Glue.
dataframeWithYearMonthDay.write
  .partitionBy(List("year", "month", "day") ++ partitionKeys: _*)
  .mode("append")
  .format("parquet")
  .option("path", s3Path)
  .saveAsTable(glueTableName)
```
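For reference, here is a minimal, self-contained version of the same write path that can be pasted into spark-shell; the table name, S3 path, partition key (`country`), and sample rows are hypothetical stand-ins for my actual values:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, dayofmonth, month, to_date, year}
import org.apache.spark.sql.types.DataTypes

val spark = SparkSession.builder().appName("saveAsTable-repro").getOrCreate()
import spark.implicits._

// Two sample rows standing in for the real data; only createdAt matters here.
val df = Seq(
  ("id-1", "US", "2023-01-15"),
  ("id-2", "DE", "2023-02-20")
).toDF("id", "country", "createdAt")

// Same derivation as above, including the FloatType cast on the partition columns.
val withParts = df
  .withColumn("year", year(to_date(col("createdAt"))).cast(DataTypes.FloatType))
  .withColumn("month", month(to_date(col("createdAt"))).cast(DataTypes.FloatType))
  .withColumn("day", dayofmonth(to_date(col("createdAt"))).cast(DataTypes.FloatType))

// "country" plays the role of the extra partition key.
withParts.write
  .partitionBy(List("year", "month", "day") ++ Array("country"): _*)
  .mode("append")
  .format("parquet")
  .option("path", "s3a://my-bucket/repro-path/") // hypothetical empty location
  .saveAsTable("my_database.my_repro_table")     // hypothetical new table
```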
Please find the stack trace attached. Note that the given S3 location is completely empty and I am trying to create a new table here.
Also, I am facing this issue with only one table; writes to other tables succeed.
Please let me know if any other information is needed from my end.