Hi @adrianhernandez, to set the default catalog (the default SQL database) in a cluster's Spark configuration, you can use the Spark configuration property `spark.databricks.sql.initial.catalog.name`. This property overrides the default catalog for that specific cluster.
Here is how you can set it in the cluster's Spark config (cluster settings > Advanced options > Spark):

```
spark.databricks.sql.initial.catalog.name cbp_reporting_gold_preprod
```

Keep in mind that this property must be set before the SparkSession starts, so calling `spark.conf.set("spark.databricks.sql.initial.catalog.name", ...)` at runtime will not take effect; configure it on the cluster and restart.
Also, remember that changing the default catalog can break existing data operations that rely on unqualified table names resolving against the previous catalog.
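After the cluster restarts, you can confirm the setting took effect from a notebook. A quick check (the catalog name `cbp_reporting_gold_preprod` is the one from your cluster config):

```sql
-- Returns the catalog that unqualified object names currently resolve to
SELECT current_catalog();
-- Expect: cbp_reporting_gold_preprod
```

If this still returns the old default, double-check that the property was added to the cluster's Spark config (not set at runtime) and that the cluster was restarted.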