Hi @Mukul
Databricks' current direction is to use Unity Catalog as an open Iceberg catalog. UC exposes tables via the Iceberg REST Catalog API, so external engines (Spark, Flink, Trino, Snowflake, PyIceberg, etc.) can read and write UC-managed Iceberg tables while Databricks keeps governance and optimization.
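To make that concrete, here is a rough sketch of an external client reading a UC table through that REST API with PyIceberg. Everything in angle brackets is a placeholder, and the exact endpoint path can vary, so confirm the Iceberg REST URI for your workspace in the Databricks docs:

```python
from pyiceberg.catalog import load_catalog

# Connect to Unity Catalog's Iceberg REST endpoint.
# All <...> values are placeholders; verify the endpoint path
# for your workspace against the Databricks documentation.
catalog = load_catalog(
    "uc",
    **{
        "type": "rest",
        "uri": "https://<workspace-host>/api/2.1/unity-catalog/iceberg",
        "token": "<databricks-personal-access-token>",
        "warehouse": "<uc-catalog-name>",  # maps to a UC catalog
    },
)

# Three-level UC names become <schema>.<table> under the chosen catalog.
table = catalog.load_table("my_schema.my_table")
print(table.scan().to_arrow())  # read directly, no Databricks compute needed
```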
If the data lives outside Databricks (Glue/HMS/Snowflake), Databricks can read it via Lakehouse Federation, which is read-only. If the data is written as Delta inside Databricks but needs Iceberg consumers, UniForm generates Iceberg metadata alongside the Delta table, so Iceberg clients can read the same data without any rewrite or copy (a sketch of enabling it follows).
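Enabling UniForm is just table properties on the Delta table. A minimal sketch via Spark SQL, with a placeholder table name (these are the property names documented for UniForm, but double-check them against your Databricks Runtime version):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Turn on Iceberg metadata generation for an existing Delta table.
# The table name is a placeholder; the properties are the documented
# UniForm settings, but verify them for your DBR version.
spark.sql("""
    ALTER TABLE main.my_schema.my_table SET TBLPROPERTIES (
      'delta.enableIcebergCompatV2' = 'true',
      'delta.universalFormat.enabledFormats' = 'iceberg'
    )
""")
```

After that, the same PyIceberg setup above can read the table: Delta stays the write format, and the Iceberg metadata is kept in sync for external readers.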
Databricks does not plan to write into external catalogs, but external engines can already write into UC-managed Iceberg tables (see the write sketch below). Overall, the model is: UC as the central catalog, open access via the Iceberg REST API, a single copy of the data, no duplication.
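For the write side, here is a rough sketch of an external (non-Databricks) Spark job writing into a UC-managed Iceberg table over the same REST endpoint. All angle-bracket values are placeholders, and this assumes your workspace permits external writes to managed Iceberg tables:

```python
from pyspark.sql import SparkSession

# External Spark with the Iceberg runtime jar on the classpath
# (e.g. org.apache.iceberg:iceberg-spark-runtime-3.5_2.12), pointed
# at UC's Iceberg REST endpoint. All <...> values are placeholders.
spark = (
    SparkSession.builder
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.uc", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.uc.type", "rest")
    .config("spark.sql.catalog.uc.uri",
            "https://<workspace-host>/api/2.1/unity-catalog/iceberg")
    .config("spark.sql.catalog.uc.token", "<databricks-token>")
    .config("spark.sql.catalog.uc.warehouse", "<uc-catalog-name>")
    .getOrCreate()
)

# The write lands in UC-governed storage; UC stays the single catalog.
spark.sql("INSERT INTO uc.my_schema.my_table VALUES (1, 'a')")
```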