I need to set up Iceberg tables in our Databricks environment, with the data residing in an S3 bucket, and then read those tables by running SQL queries.
The Databricks environment has access to S3. This was set up by
- mapping an Instance Profile to the compute cluster, and
- using an AWS access key and secret key to connect via Spark code (see the sketch after the note below).
Note: Unity Catalog has been enabled in our environment.
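For reference, the key-based access was configured from a notebook roughly as follows (the secret scope and key names are placeholders for illustration, not our actual values):

```python
# Sketch: session-scoped S3 credentials pulled from a Databricks secret scope.
# Scope and key names below are placeholders.
access_key = dbutils.secrets.get(scope="aws", key="access-key")
secret_key = dbutils.secrets.get(scope="aws", key="secret-key")

# Apply the credentials to the Hadoop configuration used by the s3a connector.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.access.key", access_key)
hadoop_conf.set("fs.s3a.secret.key", secret_key)
```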
Access to S3 from the Databricks environment was tested by copying files from S3 into DBFS; this operation was successful.
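The test was roughly the following (bucket and paths are placeholders):

```python
# Smoke test: copy one object from S3 into DBFS and list it to confirm access.
dbutils.fs.cp("s3://my-bucket/sample/data.csv", "dbfs:/tmp/iceberg-test/data.csv")
display(dbutils.fs.ls("dbfs:/tmp/iceberg-test/"))
```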
We tried to create Iceberg tables by running SQL commands from the SQL Editor, and from the Databricks notebook environment by running Python code that calls spark.sql(). However, we were unsuccessful in setting up Iceberg.
When PySpark code was run to create an Iceberg table by providing the S3 location along with the access key and secret key, we encountered the error "Data source format iceberg is not supported in Unity Catalog". See the screenshot below.
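The create statement was roughly the following (catalog, schema, table, and bucket names are placeholders):

```python
# Attempted Iceberg table creation via spark.sql(); this is the call that fails
# with "Data source format iceberg is not supported in Unity Catalog".
spark.sql("""
    CREATE TABLE main.demo.events (
        event_id   BIGINT,
        event_time TIMESTAMP,
        payload    STRING
    )
    USING iceberg
    LOCATION 's3://my-bucket/iceberg/events/'
""")
```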
When the same code was run against the Hive metastore instead, I got a Java exception: "Iceberg is not a valid Spark SQL data source".
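Our understanding from the Dremio post and the Iceberg documentation is that the Hive-metastore path needs an iceberg-spark-runtime Maven library attached to the cluster (an org.apache.iceberg:iceberg-spark-runtime-* artifact matching the cluster's Spark/Scala version) plus cluster Spark config along these lines; the values below are my assumption from those sources and are not confirmed to work in our environment:

```
spark.sql.extensions org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.spark_catalog org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type hive
```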
We also tried the iceberg and apache-iceberg Python packages; those did not work either.
We tried many suggestions from various tech forums, including Dremio and community.databricks.com, but in vain.
References used:
https://www.dremio.com/blog/getting-started-with-apache-iceberg-in-databricks/
https://community.databricks.com/t5/data-engineering/reading-iceberg-table-present-in-s3-from-databr...
Cluster configurations:
What support do I need from the Databricks community?
- Detailed, specific steps to create an Iceberg table pointing to data in S3, via SQL or PySpark code.
- List of libraries to attach to the compute resource, and the Spark configuration and environment variables to set.
- Configuration required on the SQL compute resource.
- List of Python libraries required and the repository they should be installed from.