Data Engineering

Delta Lake in Apache Spark

asethia
New Contributor

Hi,

As per the documentation (https://docs.delta.io/latest/quick-start.html), we can configure the DeltaCatalog using spark.sql.catalog.spark_catalog.
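
For reference, the quick-start configuration looks roughly like this in PySpark (a sketch based on the docs above, assuming the Delta Lake package is already on the classpath):

from pyspark.sql import SparkSession

# Replace the built-in session catalog (spark_catalog) with Delta's DeltaCatalog,
# as described in the quick start.
spark = (
    SparkSession.builder
    .appName("delta-quickstart")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)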

Iceberg supports two catalog implementations (https://iceberg.apache.org/docs/latest/spark-configuration/#catalogs):

  • Replacing the session catalog (spark_catalog) using org.apache.iceberg.spark.SparkSessionCatalog, which adds support for Iceberg tables to Spark's built-in catalog and delegates to the built-in catalog for non-Iceberg tables
  • A custom catalog using org.apache.iceberg.spark.SparkCatalog, which supports a Hive Metastore or a Hadoop warehouse as a catalog (both options are sketched below)
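
A rough PySpark sketch of those two Iceberg options, based on the configuration page linked above (the catalog name "local" and the warehouse path are just placeholders):

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # Option 1: replace the built-in session catalog; non-Iceberg tables
    # fall back to Spark's built-in catalog.
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.iceberg.spark.SparkSessionCatalog")
    .config("spark.sql.catalog.spark_catalog.type", "hive")
    # Option 2: a separate custom catalog backed by a Hadoop warehouse.
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)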

Do we have an option in Delta Lake similar to Iceberg, where we can configure a custom catalog?

1 REPLY

Anonymous
Not applicable

@Arun Sethia:

Yes, Delta Lake also supports custom catalogs. Delta Lake works with Spark's catalog plugin API (the spark.sql.catalog.* configuration), which allows for pluggable catalog implementations, so you can plug your own catalog in and use it with Delta tables.

To use a custom catalog, set the configuration property spark.sql.catalog.my_custom_catalog to the fully qualified class name of your catalog implementation. Then you can use Delta tables as usual by specifying the catalog and database in the table identifier, like so: my_custom_catalog.my_database.my_table.

One important detail: a Spark catalog plugin has to be a JVM (Scala or Java) class that implements Spark's TableCatalog interface (Delta's own org.apache.spark.sql.delta.catalog.DeltaCatalog is one example), so it cannot be written as a Python class. From PySpark you only register and use it. Here's a sketch of how to register and use a custom catalog with Delta Lake:

# com.example.MyCustomCatalog is a placeholder for your own Scala/Java class,
# which must be on the Spark classpath and implement
# org.apache.spark.sql.connector.catalog.TableCatalog.

# register the custom catalog under the name "my_custom_catalog"
spark.conf.set("spark.sql.catalog.my_custom_catalog", "com.example.MyCustomCatalog")

# use Delta tables through the custom catalog
df = spark.read.table("my_custom_catalog.my_database.my_table")
df.show()

In the example above, com.example.MyCustomCatalog is a placeholder for your own catalog class, and spark.sql.catalog.my_custom_catalog registers it under the name my_custom_catalog. To keep Delta-specific behaviour working, a custom catalog typically extends or delegates to Delta's DeltaCatalog (the same class the quick start configures as spark.sql.catalog.spark_catalog). Once registered, you use Delta tables as usual, with the catalog name as the first part of the table identifier.
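
As a quick usage sketch (the catalog, database, and table names are placeholders, and this assumes the custom catalog supports creating namespaces):

# create a namespace and a Delta table through the custom catalog, then query it
spark.sql("CREATE NAMESPACE IF NOT EXISTS my_custom_catalog.my_database")
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_custom_catalog.my_database.my_table (id INT, name STRING)
    USING delta
""")
spark.sql("SELECT * FROM my_custom_catalog.my_database.my_table").show()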

Hope this helps you figure out a solution!
