3 hours ago
I need to change a Delta table to a managed Iceberg table by writing only its metadata and reusing the same Parquet data, without rewriting the table, so it stays efficient. I don't want to use the Delta UniForm format.
3 hours ago
Greetings @ajay_wavicle, I did some digging, as this is a "not so common" request. With that said, here is what I found.
What you're aiming for is very specific: you want to end up with a Unity Catalog managed Apache Iceberg table, reusing the exact same Parquet files that back an existing Delta table, writing only new metadata (no data rewrite), and explicitly without using Delta UniForm.
Let's walk through what's actually supported today, what isn't, and what your real options are.
What's supported today (and what isn't)
Today, the only documented, metadata-only, no-rewrite way to expose an existing Delta table's Parquet files to Iceberg readers is to enable Iceberg compatibility via Delta UniForm. UniForm generates Iceberg metadata alongside the Delta transaction log and allows Iceberg engines to read the table without rewriting any data.
Managed Iceberg tables in Unity Catalog are a different thing. They are first-class Iceberg objects: writable via the Iceberg REST catalog and fully integrated with Databricks platform features. However, creating a managed Iceberg table is typically done by creating a brand-new Iceberg table (for example via CTAS), which rewrites data.
Internal guidance today is to avoid migrating existing Delta tables to managed Iceberg unless you have a concrete need for Iceberg writes from external engines. Databricks' longer-term direction is format convergence, and the recommended interoperability path from Delta remains UniForm, which you've already ruled out.
So there's no magic switch hiding somewhere.
Practical options
Option A - Minimal friction, no rewrite: enable UniForm on the Delta table
This keeps the table as a Delta table managed by Unity Catalog, but generates and maintains Iceberg metadata so Iceberg clients can read it. No Parquet files are rewritten.
This requires the table to be registered in Unity Catalog with column mapping enabled, and writes require DBR 14.3+. You enable it with ALTER TABLE and can verify via SHOW TBLPROPERTIES or DESCRIBE EXTENDED (see the verification snippet further below).
Example:
ALTER TABLE catalog.schema.delta_tbl SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.enableIcebergCompatV2' = 'true',
  'delta.universalFormat.enabledFormats' = 'iceberg'
);
If you need to force a metadata sync:
MSCK REPAIR TABLE catalog.schema.delta_tbl SYNC METADATA;
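To verify the state (as mentioned above), the standard catalog commands are enough; catalog.schema.delta_tbl is the same placeholder table name used in the example:
-- Confirm the UniForm/IcebergCompat properties are set on the table
SHOW TBLPROPERTIES catalog.schema.delta_tbl;
-- DESCRIBE EXTENDED surfaces the table details as well; once Iceberg
-- metadata has been generated, the conversion status should appear there
-- (exact field names vary by DBR version)
DESCRIBE EXTENDED catalog.schema.delta_tbl;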
Option B - True managed Iceberg object: create a new managed Iceberg table
If you need a first-class managed Iceberg table (for native Iceberg writes via REST catalog and full platform integration), the supported path is to create a new Iceberg table using CTAS or INSERT INTO. This rewrites data.
Example:
CREATE TABLE catalog.schema.ice_tbl
USING ICEBERG
AS SELECT * FROM catalog.schema.delta_tbl;
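If you would rather define the schema up front instead of inheriting it through CTAS, the same rewrite can be expressed as CREATE followed by INSERT INTO. A minimal sketch, with placeholder column names:
CREATE TABLE catalog.schema.ice_tbl (
  id BIGINT,
  event_ts TIMESTAMP,
  payload STRING
) USING ICEBERG;

-- Full rewrite: every row is read from the Delta table and written
-- into new Parquet files owned by the Iceberg table
INSERT INTO catalog.schema.ice_tbl
SELECT id, event_ts, payload
FROM catalog.schema.delta_tbl;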
Option C - Two-step "no rewrite" path via foreign Iceberg (advanced, caveats apply)
If you absolutely must avoid a rewrite and must end with a managed Iceberg table, there is a possible but non-trivial route:
First, use open-source Iceberg tooling to create or register a foreign Iceberg table that points at the existing Parquet files. This is a metadata-only operation, but it requires constructing valid Iceberg metadata and manifests outside of Unity Catalog (for example in Glue or HMS). This is not a productized Databricks workflow.
Second, in Unity Catalog, convert that foreign Iceberg table into a managed Iceberg table. UC does support foreign Iceberg → managed Iceberg conversions, and this step does not rewrite data.
This works, but it's operationally complex, requires external catalog plumbing, and is not the "in-place flip" most teams expect. It's best reserved for cases where external Iceberg writes are mandatory and a rewrite is truly impossible.
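For illustration only, the first step might look roughly like this with the open-source Iceberg Spark runtime pointed at an external catalog (ext_catalog, the database, table, path, and columns are all placeholders, and this is not a Databricks-productized workflow). Note that Iceberg's add_files procedure registers whatever Parquet files it finds at the location, so files the Delta log has already removed (tombstones) would need to be excluded first:
-- 1) Create an empty Iceberg table in the external catalog (e.g. Glue/HMS)
--    with a schema matching the Delta table's Parquet files
CREATE TABLE ext_catalog.db.ice_from_parquet (
  id BIGINT,
  event_ts TIMESTAMP,
  payload STRING
) USING ICEBERG;

-- 2) Register the existing Parquet files as Iceberg data files.
--    Metadata-only: files are referenced in place, not copied or rewritten.
CALL ext_catalog.system.add_files(
  table => 'db.ice_from_parquet',
  source_table => '`parquet`.`s3://bucket/path/to/delta_tbl`'
);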
Important limitations and gotchas
If you later enable Liquid Clustering on a managed Iceberg table, deletion vectors and row tracking must be disabled first. Otherwise you'll hit errors due to Iceberg v2 concurrency limitations.
If you do consider UniForm at any point, be aware that Delta and Iceberg versions do not advance in lockstep. Databricks tracks converted_delta_version and converted_delta_timestamp to relate them, and you may need to explicitly sync metadata in some workflows.
Bottom line / recommendation
Given your constraints (no rewrite and no UniForm), there is currently no documented, in-place, metadata-only command in Databricks that converts a Delta table directly into a Unity Catalog managed Iceberg table.
Your supported choices today are:
• Enable UniForm for no-rewrite Iceberg read interoperability
• Accept a one-time CTAS rewrite to create a managed Iceberg table
• Or pursue the foreign Iceberg → managed Iceberg two-step using external tooling if you must end up with a managed Iceberg table without rewriting Parquet data
That's the honest state of the world right now.
Hope this helps you address your task.
Cheers, Louis.
25m ago
thanks Louis!
15m ago
@Louis_Frolio, can you help me move a managed UC Delta table, along with its version history for time travel, from one storage account in Databricks to another cloud Databricks workspace? I see a deny assignment on the Databricks-associated storage account. I want to recreate the tables and everything else as-is.