Set default database through Cluster Spark Configuration
10-02-2023 12:11 PM
I want to set the default catalog (a.k.a. the default SQL database) in a cluster's Spark configuration. I've tried the following:
spark.catalog.setCurrentDatabase("cbp_reporting_gold_preprod") - this works in a notebook but does nothing when set on the cluster.
spark.sql.catalog.spark_catalog.defaultDatabase("cbp_reporting_gold_preprod")
In the Spark config I enter a slightly different syntax (without the parentheses or quotes). The logs show no errors from these entries, but they simply have no effect. I use the following command in a notebook to check the result:
spark.catalog.currentDatabase()
The end goal is to set several options on the cluster so users can query their data with Spark SQL without having to know which database their tables live in. I've googled extensively for days and have not found a solution. I'm wondering why spark.catalog.setCurrentDatabase("cbp_reporting_gold_preprod") works in a notebook, while the Spark config entry spark.catalog.setCurrentDatabase cbp_reporting_gold_preprod doesn't seem to do anything.
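For concreteness, here is a minimal sketch of what I run in a notebook to set and verify the database (in a Databricks notebook the spark session already exists, so the builder line only matters in a standalone script; the table name some_table is just a placeholder):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in a Databricks notebook

# Setting the current database through the Catalog API works within the session:
spark.catalog.setCurrentDatabase("cbp_reporting_gold_preprod")
print(spark.catalog.currentDatabase())  # prints: cbp_reporting_gold_preprod

# Once that is set, users can query tables without qualifying the database name:
spark.sql("SELECT COUNT(*) FROM some_table").show()  # some_table is a placeholder

# The equivalent cluster Spark config entries I tried (key and value, no parentheses
# or quotes), which have no visible effect after startup:
#   spark.catalog.setCurrentDatabase cbp_reporting_gold_preprod
#   spark.sql.catalog.spark_catalog.defaultDatabase cbp_reporting_gold_preprod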
10-06-2023 10:30 AM
I've tried different commands in the cluster's Spark config, and none of them work. They are accepted at cluster startup without any errors in the logs, but once you run a notebook attached to the cluster, the default catalog is still set to 'default'.
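For reference, this is roughly the check I run after attaching a notebook (a sketch; it assumes the cluster-level key is passed through as an ordinary spark.* config entry and is therefore visible via spark.conf):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in a Databricks notebook

# The cluster-level entry appears to be stored as a plain config value...
print(spark.conf.get("spark.sql.catalog.spark_catalog.defaultDatabase", "<not set>"))

# ...but the session still starts in the default database:
print(spark.catalog.currentDatabase())  # prints: default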