Hi @ClarkElliott
Good day!!
Cause
The TIMESTAMP_NANOS Parquet type is not supported in open source Apache Spark or in Databricks Runtime (including 11.3 LTS and above). If a Parquet file contains fields of this type, reading it fails with an Illegal Parquet type exception, and schema inference fails as well, since Spark cannot interpret the unsupported timestamp type.
To restore the pre-Spark 3.2 behavior, you can set spark.sql.legacy.parquet.nanosAsLong to true; nanosecond timestamp fields are then read as LongType (raw nanosecond values) instead of raising the exception.
Reference: https://spark.apache.org/docs/4.0.0/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32
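
Outside of DLT, in a regular notebook or job, a minimal sketch of the same workaround looks like this (the file path is hypothetical, and this assumes the flag can be set at session level; it can also go in the cluster's Spark config):

```python
# Read TIMESTAMP_NANOS fields as LongType (raw nanosecond values)
# instead of failing with an Illegal Parquet type exception.
spark.conf.set("spark.sql.legacy.parquet.nanosAsLong", "true")

# Hypothetical path to a Parquet file containing TIMESTAMP_NANOS fields.
df = spark.read.parquet("/path/to/file_with_nanos.parquet")
df.printSchema()  # nanosecond timestamp columns show up as long
```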
You can add the following configuration to the DLT pipeline settings:
spark.sql.legacy.parquet.nanosAsLong true
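
In the pipeline settings JSON (the Configuration section under the pipeline's Advanced settings in the UI), that corresponds to an entry like this sketch:

```json
{
  "configuration": {
    "spark.sql.legacy.parquet.nanosAsLong": "true"
  }
}
```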

Kindly let me know if you have any questions on this.