I am trying to rename a Delta table like this:
spark.conf.set("spark.databricks.delta.alterTable.rename.enabledOnAWS", "true")
spark.sql("ALTER TABLE db1.rz_test5 RENAME TO db1.rz_test6")
The data is on AWS S3, which is why I have to set the Spark config in the notebook. This works fine on the interactive cluster.
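For reference, a minimal sketch of the cell as one could run it, with an extra (purely illustrative) check that the flag is actually picked up by the session; it assumes a Databricks notebook where spark is predefined:
# Enable Delta table rename on S3 for this session
spark.conf.set("spark.databricks.delta.alterTable.rename.enabledOnAWS", "true")
# Illustrative sanity check: should print "true"
print(spark.conf.get("spark.databricks.delta.alterTable.rename.enabledOnAWS"))
# Rename the table (same names as above)
spark.sql("ALTER TABLE db1.rz_test5 RENAME TO db1.rz_test6")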
But when I run it as a job on a job cluster, it fails with this error:
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table.
The runtime of the interactive cluster is 13.3 LTS, with no Spark configuration set on the cluster.
The runtime of the job cluster used by the job is 14.3 LTS, also with no Spark configuration set on the cluster.
Both clusters use the same instance profile.
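In case it matters, neither cluster has anything like the following in its Spark config (Advanced options > Spark config); this is just a sketch of how the same flag would look if it were set at the cluster level instead of in the notebook:
spark.databricks.delta.alterTable.rename.enabledOnAWS true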