Databricks does not let you set a workspace-wide default for all TBLPROPERTIES. However, you can use the spark.databricks.delta.properties.defaults.* configuration prefix to set defaults for new Delta tables created in a specific session, cluster, or pipeline.
If you want every new Delta table to automatically include:

```sql
TBLPROPERTIES ("delta.feature.timestampNtz" = "supported")
```

you should set it like this:

```sql
SET spark.databricks.delta.properties.defaults.feature.timestampNtz = supported;
```
You can also set it programmatically in a notebook or a cluster initialization script:

```python
spark.conf.set("spark.databricks.delta.properties.defaults.feature.timestampNtz", "supported")
```
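To confirm the default is actually being applied, you can create a throwaway table in the same session and inspect its properties (the catalog, schema, and table names below are illustrative):

```sql
CREATE TABLE my_catalog.my_schema.tz_check (ts TIMESTAMP_NTZ);

-- The output should include delta.feature.timestampNtz = supported
SHOW TBLPROPERTIES my_catalog.my_schema.tz_check;
```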
If Pipeline Configuration Had No Effect
If you defined this in the pipeline configuration JSON and saw no change, that is expected. Delta Live Tables and Lakeflow Declarative Pipelines currently do not apply spark.databricks.delta.properties.defaults.* settings unless they are explicitly passed to the Spark session at runtime.
As a workaround, add the setting in the `spark_conf` block of your pipeline configuration.
Alternatively, set it in a cluster policy or a workspace-level notebook execution environment.
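As a sketch, the `spark_conf` entry in a pipeline settings JSON could look like the following (the cluster `label` and surrounding structure are illustrative and depend on your pipeline definition):

```json
{
  "clusters": [
    {
      "label": "default",
      "spark_conf": {
        "spark.databricks.delta.properties.defaults.feature.timestampNtz": "supported"
      }
    }
  ]
}
```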
Therefore, the only reliable method is to set spark.databricks.delta.properties.defaults.feature.timestampNtz=supported at the cluster or session level. Note that the leading delta. in the table property name is replaced by the spark.databricks.delta.properties.defaults. prefix. This then acts as your default TBLPROPERTIES for all new tables.
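The key-naming convention (drop the leading `delta.` from the table property and prepend the defaults prefix) is easy to get wrong. Here is a small illustrative Python helper, not part of any Databricks API, that derives the session config key from a Delta table property name:

```python
DEFAULTS_PREFIX = "spark.databricks.delta.properties.defaults."

def default_conf_key(table_property: str) -> str:
    """Map a Delta table property name to its session-default config key.

    Delta table properties start with "delta."; the session default key
    replaces that prefix with "spark.databricks.delta.properties.defaults.".
    """
    if not table_property.startswith("delta."):
        raise ValueError(f"not a Delta table property: {table_property!r}")
    return DEFAULTS_PREFIX + table_property[len("delta."):]

# default_conf_key("delta.feature.timestampNtz")
# -> "spark.databricks.delta.properties.defaults.feature.timestampNtz"
```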
Some additional references
https://docs.databricks.com/aws/en/sql/language-manual/data-types/timestamp-ntz-type
https://docs.databricks.com/aws/en/delta/table-properties
https://stackoverflow.com/questions/70168370/how-to-specify-delta-table-properties-when-writing-a-st...