Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

Default schema in SQL Editor is not 'default' when unity catalog is set as default catalog

gyorgyjelinek
New Contributor

In the workspace settings (Workspace admin > Advanced > Other), "Default catalog for the workspace" is set to a Unity Catalog catalog rather than hive_metastore. The expected behaviour, copied here from the related "more info" panel:

"Setting the default catalog for the workspace determines the catalog that is used when queries do not reference a fully qualified 3 level name. For example, if the default catalog is set to 'retail_prod' then a query 'SELECT * FROM myTable' would reference the object 'retail_prod.default.myTable' (the schema 'default' is always assumed)."

With hive_metastore this works as expected when a new SQL Editor tab is opened. With a Unity Catalog catalog, the new tab does use that catalog as the default (as expected), but the default schema is not 'default'; instead, the first schema in the catalog in alphabetical order is selected - in my case one called 'archive'.

Is there something I missed here, or is there a setting I can use to enforce the 'default' schema as the default in the SQL Editor?

1 ACCEPTED SOLUTION


Alberto_Umana
Databricks Employee

Hi @gyorgyjelinek,

It appears that the behavior you are experiencing is due to the default schema selection mechanism in Unity Catalog. When a new SQL Editor tab is opened, the default catalog is correctly set to the Unity Catalog you specified. However, instead of defaulting to the 'default' schema, it defaults to the first schema in alphabetical order, which in your case is 'archive'.

This behavior is consistent with the current implementation of Unity Catalog in Databricks. Unfortunately, there is no setting available to enforce the 'default' schema as the default schema in the SQL Editor when using Unity Catalog: if no schema is specified, the SQL Editor selects the first schema in alphabetical order.

To work around this, you can explicitly specify the schema in your queries or set the schema at the beginning of your SQL sessions using the `USE SCHEMA` command. For example:

USE SCHEMA default;

This command can be run at the start of your SQL session to ensure that the 'default' schema is used for subsequent queries.

Or simply use the fully qualified three-level name for the table, for example:

SELECT * FROM catalog.schema.table
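Putting the two workarounds together, a session might start like this (the catalog and table names are illustrative, not from your workspace):

```sql
-- Option 1: pin the schema once at the start of the session,
-- then unqualified names resolve against it.
USE SCHEMA default;
SELECT * FROM myTable;  -- resolves to <default_catalog>.default.myTable

-- Option 2: fully qualify every reference, which works regardless
-- of the editor's currently selected schema.
SELECT * FROM retail_prod.default.myTable;
```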


