Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Unable to Add Multiple Columns in Single ALTER TABLE Statement on Iceberg Table via Unity REST Catalog

Ashok_Vengala
New Contributor

Hello Databricks Team,

I have implemented code to integrate the Iceberg Unity REST Catalog with the Teradata OTF engine and successfully performed read and write operations, following the documentation at https://docs.databricks.com/aws/en/external-access/iceberg#gsc.tab=0. However, I am encountering the following problem:

When attempting to add multiple columns in a single ALTER TABLE statement on an Iceberg table managed via Databricks Unity Catalog and backed by cloud storage (AWS/GCP/Azure), the operation fails with the error:

Commit failed: Failed to commit to the table, requirement failed: There may be at most 1 SetCurrentSchema metadata update.

Steps to reproduce:
1) Create a table with two columns.
2) Add a single column using ALTER TABLE ... ADD ... (works as expected).
3) Attempt to add two columns in one statement: ALTER TABLE ... ADD col1 type, ADD col2 type; (fails with the above error).
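
For reference, a rough sketch of the working versus failing pattern, written here as Spark SQL purely for illustration (the original report issues these statements from the Teradata OTF engine, and the catalog, table, and column names below are placeholders):

# Works: one column addition per ALTER TABLE statement,
# i.e. one schema change per commit.
spark.sql("ALTER TABLE my_catalog.my_schema.my_table ADD COLUMNS (col1 STRING)")

# Fails in the scenario reported above: two column additions in a single
# statement, rejected with the "at most 1 SetCurrentSchema metadata update"
# commit error.
spark.sql("ALTER TABLE my_catalog.my_schema.my_table ADD COLUMNS (col2 STRING, col3 STRING)")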



I do not have access to the Databricks Unity Catalog code base to debug this issue, as it is owned by the Databricks team. Could you clarify if this is a known limitation or a bug in the REST Catalog implementation?
1 REPLY

nayan_wylde
Honored Contributor III

This error stems from the Iceberg table metadata update constraints enforced by Unity Catalog's Iceberg REST API. Specifically, the REST Catalog currently does not support multiple schema changes in a single commit: each ALTER TABLE operation that modifies the schema (e.g., adding columns) triggers a SetCurrentSchema update, and the catalog restricts commits to one such update.

To avoid this error:

  • Add columns one at a time using separate ALTER TABLE statements.
  • If automating schema evolution, ensure your logic batches column additions sequentially.
For example, the same one-statement-at-a-time pattern, shown here for setting column comments:

columns_comments = {
    "col1": "comment1",
    "col2": "comment2",
    # ...
    "col236": "comment236",
}

# Each iteration issues its own ALTER TABLE, so every commit carries a
# single schema change.
for col, comment in columns_comments.items():
    spark.sql(f"ALTER TABLE table_name CHANGE COLUMN {col} COMMENT '{comment}'")

Consider raising a support ticket or feature request with Databricks if this limitation impacts your workflow significantly.
