
Set default database through Cluster Spark Configuration

adrianhernandez
New Contributor III

I want to set the default catalog (AKA default SQL database) in a cluster's Spark configuration. I've tried the following:

spark.catalog.setCurrentDatabase("cbp_reporting_gold_preprod") - this works in a notebook but does nothing when set in the cluster's Spark config.

spark.sql.catalog.spark_catalog.defaultDatabase("cbp_reporting_gold_preprod")

In the Spark config I enter a slightly different syntax (without the parentheses or quotes). The logs show no errors for these entries, but they simply have no effect. I use the following command in a notebook to check the current database:

spark.catalog.currentDatabase()

The end goal is to set several options on the cluster so users can simply query their data with Spark SQL and not have to worry about which database their tables live in. I've googled extensively for days and have not found a solution yet. I'm wondering why spark.catalog.setCurrentDatabase("cbp_reporting_gold_preprod") works in a notebook, but entering spark.catalog.setCurrentDatabase cbp_reporting_gold_preprod in the cluster's Spark configuration doesn't seem to do anything.
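For reference, this is the session-level version that does work when run from a notebook attached to the cluster (the schema name is the one from this post; the final table name is only a placeholder):

# These are method calls on the live SparkSession, not Spark configuration
# properties, so they only take effect inside a running session.
spark.sql("USE cbp_reporting_gold_preprod")                      # SQL form
spark.catalog.setCurrentDatabase("cbp_reporting_gold_preprod")   # PySpark form

print(spark.catalog.currentDatabase())   # should print: cbp_reporting_gold_preprod

# Unqualified table names now resolve against that schema.
# 'my_table' is a placeholder, not a real table from this thread.
spark.sql("SELECT * FROM my_table LIMIT 10").show()

The cluster's Spark config box, by contrast, only accepts space-separated key-value pairs (for example, spark.sql.shuffle.partitions 16). A method call pasted there is just stored as an opaque, unused property and never executed, which would explain why there are no errors and also no effect.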

1 REPLY

adrianhernandez
New Contributor III

I've tried different commands in the cluster's Spark config and none work. They execute at cluster startup without any errors in the logs, but once you run a notebook attached to the cluster, the default catalog is still set to 'default'.
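One workaround sketch, not an official fix: keep the USE statement in a small shared setup notebook and have every user notebook run it first with %run (the notebook path below is hypothetical):

# Contents of a shared setup notebook, e.g. /Shared/setup_default_schema (hypothetical path)
spark.sql("USE cbp_reporting_gold_preprod")

# First cell of each user notebook:
# %run /Shared/setup_default_schema
# After that, unqualified Spark SQL table names resolve against
# cbp_reporting_gold_preprod for the rest of that notebook's session.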
