Thanks for your question!
Although it shouldn't be necessary, could you please try the following:
- Set the `spark.databricks.sql.initial.catalog.name` configuration to `my_catalog` in your Spark session to ensure the correct catalog is initialized (see the first sketch after this list).
- Use `current_catalog()` to print the active catalog before executing the query; this will help verify the catalog in use (second sketch below).
- Print the query's `explain()` plan in extended mode and check whether any other catalog is being referenced. Even though you're using the three-level notation, this step can help confirm there are no unintended catalog overrides (third sketch below).
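Here's a minimal PySpark sketch of the first step; it assumes the Databricks-provided `spark` session, and `my_catalog` is a placeholder for your actual catalog name:

```python
# Set the initial catalog for the session ("my_catalog" is a placeholder).
# Note: as an "initial" setting, this is typically picked up when the
# session/cluster starts, so setting it mid-session may not retroactively
# change references that were already resolved.
spark.conf.set("spark.databricks.sql.initial.catalog.name", "my_catalog")
```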
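For the second step, something like this should print the active catalog:

```python
# Print the catalog the session currently resolves unqualified names against.
print(spark.sql("SELECT current_catalog()").first()[0])
```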
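And for the third step, a sketch with a hypothetical three-level table name (swap in your own query):

```python
# Hypothetical query using three-level notation; replace with yours.
df = spark.sql("SELECT * FROM my_catalog.my_schema.my_table")

# Extended mode prints the parsed, analyzed, optimized, and physical plans;
# scan these for any catalog other than the one you expect.
df.explain(extended=True)
```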
Let me know if these steps reveal any discrepancies or if you'd like to dig in further!