To automate the configuration of Spark on serverless compute, Databricks has removed support for manually setting most Spark configurations.
I've been using the userMetadata attribute to add context to all workloads that write to Delta tables.
I have the following options (sketched in code below):
DataFrame operations: .option("userMetadata", "xxxxx") on the writer
spark.conf.set("spark.databricks.delta.commitInfo.userMetadata", ...) for session scope
SET spark.databricks.delta.commitInfo.userMetadata for SQL
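For reference, a minimal sketch of the three approaches; df, the table name, and the metadata string are just placeholders:

// 1. Per-write option on the Delta DataFrame writer
df.write
  .format("delta")
  .option("userMetadata", "ingest-run-42")   // placeholder metadata value
  .mode("append")
  .saveAsTable("my_table")                   // hypothetical table name

// 2. Session-scoped Spark conf, picked up by subsequent Delta commits in the session
spark.conf.set("spark.databricks.delta.commitInfo.userMetadata", "ingest-run-42")

// 3. The SQL equivalent of the session-scoped conf
spark.sql("SET spark.databricks.delta.commitInfo.userMetadata = ingest-run-42")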
What about merge operations, using either DataFrames or SQL, on serverless compute? For example, this DataFrame merge:
// deltaTable is the target Delta table (io.delta.tables.DeltaTable); changesDF holds the incoming changes
deltaTable.as("t")
  .merge(
    changesDF.as("s"),
    "s.PK = t.PK")
  .whenMatched().updateAll()      // update all columns of matched rows
  .whenNotMatched().insertAll()   // insert rows with no match in the target
  .execute()
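For what it's worth, here is what I was planning to try: set the session-level conf immediately before the merge, on the assumption that spark.databricks.delta.commitInfo.userMetadata is still honoured on serverless (which is exactly what I'm unsure about). The table name and metadata value below are placeholders:

import io.delta.tables.DeltaTable

// Assumption: the session-scoped conf is applied to the MERGE commit on serverless
spark.conf.set("spark.databricks.delta.commitInfo.userMetadata", "merge-batch-001")  // placeholder value

val target = DeltaTable.forName(spark, "target_table")   // hypothetical target table

target.as("t")
  .merge(changesDF.as("s"), "s.PK = t.PK")
  .whenMatched().updateAll()
  .whenNotMatched().insertAll()
  .execute()

// Presumably the same session conf would also cover a SQL MERGE run via spark.sql("MERGE INTO ...")

Can anyone confirm whether this works on serverless, or whether there is another supported way to attach userMetadata to merge commits there?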