01-22-2024 08:08 PM
I need to set up Iceberg tables in our Databricks environment, with the data residing in an S3 bucket, and then read those tables by running SQL queries.
Our Databricks environment has access to S3.
Note: Unity Catalog has been enabled in our environment.
Access to S3 from the Databricks environment was tested by copying data from S3 into DBFS; that operation was successful.
We tried to create Iceberg tables by running SQL commands from the SQL Editor, and from the Databricks notebook environment by running Python code that calls spark.sql(). However, we were unsuccessful in setting up the Iceberg tables.
When PySpark code was run to create an Iceberg table by providing the S3 location along with the access key and secret key, we encountered the error "Data source format iceberg is not supported in Unity Catalog". See the screenshot below.
When the code was run against the Hive metastore instead, I got a Java exception: "Iceberg is not valid Spark SQL data source".
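In both cases, the statement was essentially of the following form (the catalog, schema, table, and bucket names below are placeholders rather than the actual values, and the S3 credentials were supplied separately through the Spark configuration):
-- Attempted: register a native Iceberg table over an S3 location
CREATE TABLE my_catalog.my_schema.my_iceberg_table (
  id INT,
  name STRING)
USING iceberg
LOCATION 's3://my-bucket/warehouse/my_iceberg_table/';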
We also tried the iceberg and apache-iceberg Python packages; that did not work either.
We tried many suggestions from various tech forums, including Dremio and community.databricks.com, but in vain.
References used:
https://www.dremio.com/blog/getting-started-with-apache-iceberg-in-databricks/
Cluster configurations:
What support do I need from the Databricks community?
01-25-2024 12:06 PM
@JohnsonBDSouza - could you please let me know if you have had a chance to review the UniForm feature, which allows you to create Iceberg tables from the Delta format?
Based on what I could understand from the above, you can create a Delta table using an example like the one below:
-- Create a Delta table with UniForm enabled so it can also be read as Iceberg
CREATE TABLE T
TBLPROPERTIES(
  'delta.columnMapping.mode' = 'name',
  'delta.universalFormat.enabledFormats' = 'iceberg')
AS
SELECT * FROM source_table;
Please refer to the documentation for the prerequisites, configurations, and limitations associated with using UniForm: https://docs.databricks.com/en/delta/uniform.html
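If the source data is already in a Delta table, the same documentation also covers turning on UniForm for an existing table. As a rough sketch (the table name is a placeholder; please check the linked page for the exact prerequisites in your runtime):
-- Enable UniForm (Iceberg metadata generation) on an existing Delta table
ALTER TABLE my_catalog.my_schema.existing_delta_table SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.enableIcebergCompatV2' = 'true',
  'delta.universalFormat.enabledFormats' = 'iceberg');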
07-18-2025 02:55 AM
Hi @shan_chandra,
While doing a POC on the Databricks Iceberg format, the code below is working fine for me.
In a Databricks notebook:
%sql
-- Create a Delta table with UniForm (Iceberg compatibility v2) enabled
CREATE TABLE genai_demo.default.iceberg_table2
TBLPROPERTIES(
  'delta.columnMapping.mode' = 'name',
  'delta.universalFormat.enabledFormats' = 'iceberg',
  'delta.enableIcebergCompatV2' = 'true')
AS
SELECT * FROM catalog.db.table;
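As a quick sanity check that Iceberg metadata is actually being generated, I also look at the table properties of the table created above:
-- Verify that the UniForm/Iceberg properties are listed for the table
SHOW TBLPROPERTIES genai_demo.default.iceberg_table2;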
The details I used are:
01-09-2025 06:33 PM
It looks like Databricks is making it difficult to use Iceberg tables. There is no clear online documentation or set of steps for using them with plain Spark and Spark SQL, and the errors thrown in the Databricks environment are very cryptic.
It feels like they wanted to make things difficult for customers.
07-19-2025 10:32 AM
@JohnsonBDSouza , @PujithaKarnati , @Venkat5 ,
Based on the recent updates from DAIS 2025, there are three ways to use the Iceberg format in Databricks:
1) Managed Iceberg tables
2) Foreign Iceberg tables
3) Enabling Iceberg reads on Delta tables
Please refer to the links below for a detailed explanation and the reference slides; a small sketch of the first option is also shown below.
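For example, here is a minimal sketch of the first option, a managed Iceberg table in Unity Catalog (the catalog, schema, and table names are placeholders, and the managed Iceberg table feature has to be available and enabled in your workspace):
-- Managed Iceberg table: Unity Catalog owns and manages the Iceberg metadata
CREATE TABLE my_catalog.my_schema.managed_iceberg_table (
  id INT,
  name STRING)
USING ICEBERG;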
Thanks.