Unable to create Iceberg tables pointing to data in S3 and run queries against the tables.
01-22-2024 08:08 PM
I need to set up Iceberg tables in our Databricks environment, with the underlying data residing in an S3 bucket, and then read those tables by running SQL queries.
The Databricks environment has access to S3. This was done by:
- mapping an instance profile to the compute cluster, and
- connecting via Spark code using an AWS access key and secret key.
Note: Unity Catalog has been enabled in our environment.
Access to S3 from the Databricks environment was tested by copying files from S3 into DBFS; this operation was successful.
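For reference, the setup and the test looked roughly like this (the secret scope, key names, and bucket path below are placeholders, not our actual values):

# Rough sketch of the key-based S3 access setup described above
# (secret scope, key names, and bucket path are placeholders).
access_key = dbutils.secrets.get(scope="aws", key="access-key")
secret_key = dbutils.secrets.get(scope="aws", key="secret-key")
spark.conf.set("fs.s3a.access.key", access_key)
spark.conf.set("fs.s3a.secret.key", secret_key)

# Sanity check: list the bucket, then copy a sample file into DBFS.
display(dbutils.fs.ls("s3a://my-bucket/iceberg-data/"))
dbutils.fs.cp("s3a://my-bucket/iceberg-data/sample.parquet", "dbfs:/tmp/sample.parquet")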
I tried to create Iceberg tables by running SQL commands from the SQL Editor, and from a Databricks notebook by running Python code that calls spark.sql().
However, we were unsuccessful in setting up the Iceberg tables.
When PySpark code was run to create an Iceberg table by providing the S3 location, access key, and secret key, we encountered the error "Data source format iceberg is not supported in Unity Catalog".
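A rough reconstruction of the failing call (the catalog, schema, table, and bucket names are placeholders):

# Reconstruction of the failing call; names are placeholders. Under
# Unity Catalog this raises the error quoted above.
spark.sql("""
    CREATE TABLE main.demo.events (id BIGINT, ts TIMESTAMP)
    USING iceberg
    LOCATION 's3a://my-bucket/iceberg/events/'
""")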
When the code was run against the Hive metastore instead, I got a Java exception: "Iceberg is not a valid Spark SQL data source".
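For comparison, the Dremio guide configures Iceberg on open-source Spark by registering an Iceberg catalog through Spark configs, roughly as below; this is what we tried to translate to Databricks. The catalog name, bucket, and version numbers are placeholders, and on Databricks these settings would go into the cluster's Spark config and libraries rather than notebook code:

# Sketch of the standard open-source Iceberg-on-Spark setup; catalog
# name, bucket, and versions are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.4.3")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # A Hadoop-type catalog keeps all table metadata under an S3 warehouse path.
    .config("spark.sql.catalog.my_iceberg", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.my_iceberg.type", "hadoop")
    .config("spark.sql.catalog.my_iceberg.warehouse", "s3a://my-bucket/warehouse")
    .getOrCreate()
)

spark.sql("CREATE TABLE my_iceberg.db.events (id BIGINT, ts TIMESTAMP) USING iceberg")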
We also tried the iceberg and apache-iceberg Python packages; that did not work either.
I tried many suggestions from various tech forums, including Dremio and community.databricks.com, but in vain.
References used:
https://www.dremio.com/blog/getting-started-with-apache-iceberg-in-databricks/
Cluster configurations: (shared as a screenshot in the original post)
What support do I need from the Databricks community?
- Detailed, specific steps to create an Iceberg table pointing to data in S3, via SQL or PySpark code.
- The list of libraries to attach to the compute resource, and the Spark and environment variables to set.
- The configuration required on a SQL compute resource.
- The list of Python libraries required, and the location of their repository.
01-25-2024 12:06 PM
@JohnsonBDSouza - could you please let me know if you have had a chance to review the UniForm feature, which allows you to create Iceberg tables from the Delta format?
Based on what I could understand from the above, you can create a Delta table using the example below:
CREATE TABLE T
TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.universalFormat.enabledFormats' = 'iceberg'
)
AS SELECT * FROM source_table;
Please refer to the documentation for the prerequisites, configs, and limitations associated with using UniForm: https://docs.databricks.com/en/delta/uniform.html
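Once the table is created, a quick way to confirm UniForm is active is to inspect the table properties, for example:

# Quick check that UniForm is enabled on the table T created above
# (assumes the current catalog/schema; adjust the table name as needed).
for row in spark.sql("SHOW TBLPROPERTIES T").collect():
    if "iceberg" in row.key.lower() or "universalformat" in row.key.lower():
        print(row.key, "=", row.value)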
01-09-2025 06:33 PM
It looks like Databricks is making it difficult to use Iceberg tables. There is no clear online documentation and no steps provided for using them with plain Spark and Spark SQL, and the errors thrown in the Databricks environment are very cryptic.
It feels like they wanted to make things difficult for customers.

