Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

Cluster with shared access mode cannot query metastore

New Contributor III


I have created a new UC enabled metastore using an Azure storage account and container connected to a Databricks workspace using an access connector. At first glance everything seems to work. 

I encounter a problem, however, when I try to query UC using a shared access mode cluster. I get an error, 

`org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient`

which seems to be caused by a `SparkConnectGrpcException` thrown by the SparkConnectClient, which reports that it "cannot invoke RPC" and that "closed" is contained in the `grpc.RpcError`.

What am I doing wrong, and how can I fix it?  



- There is no problem when I use SQL Warehouses or single user access mode clusters.

- I have a metastore in a different region (West Europe) that doesn't experience this problem  

- The problematic workspace/metastore is located in North Europe


Community Manager

Hi @alm, it appears that you’re encountering issues with the Hive metastore client when querying UC (Unity Catalog) from a shared access mode cluster in your Databricks workspace.

Let’s troubleshoot this step by step:

  1. Hive Metastore Initialization:

    • The error message you’re seeing, `org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient`, suggests that the Hive metastore client is not being properly initialized.
    • Ensure that you have correctly set up and initialized the Metastore database for Hive. After installing and configuring Hive, initialize the Metastore database with the chosen dat...
    • If you haven’t already, verify that the Hive Metastore service is running and properly configured.
  2. SparkConnectGrpcException:

    • The exception `SparkConnectGrpcException` indicates a problem in the Spark Connect service, which shared access mode clusters use for gRPC communication between the client and the cluster.
    • Let’s explore potential solutions:
  3. Check Network Connectivity:

    • Ensure that there are no network-related issues between your Databricks workspace and the Hive metastore.
    • Verify that the necessary ports (such as 9083 for Hive Metastore) are accessible and not blocked by firewalls or network policies.
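As a quick probe for the two bullets above, a plain TCP check like the sketch below can rule out a blocked port before you dig into firewall rules. The hostname in the usage comment is a placeholder; substitute your own metastore endpoint:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholder hostname): check the default Hive Metastore port
# port_reachable("my-metastore.example.com", 9083)
```

A `False` result only tells you the TCP handshake failed; it does not distinguish a firewall drop from a stopped service, so follow up with the service-side checks below.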
  4. Access Control and Permissions:

    • Confirm that the shared access mode cluster has the appropriate permissions to interact with the Hive metastore.
    • Check if the access connector configuration is correctly set up to allow communication between Databricks and the metastore.
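To make the permission check above systematic, you can flatten the output of `SHOW GRANTS` into (principal, privilege) pairs and diff it against the privileges the cluster's principal needs. Both the pair shape and the required-privilege set below are illustrative assumptions; adjust them to your governance model:

```python
# Privileges a principal typically needs to read a UC table
# (illustrative set, not an exhaustive list).
REQUIRED = {"USE CATALOG", "USE SCHEMA", "SELECT"}

def missing_privileges(grants, principal, required=REQUIRED):
    """grants: iterable of (principal, privilege) pairs, e.g. flattened
    from SHOW GRANTS output. Returns the privileges the principal lacks."""
    held = {priv for who, priv in grants if who == principal}
    return required - held
```

An empty return value means the principal holds everything in the required set; anything else names the grants to add.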
  5. Region-Specific Configuration:

    • Since you mentioned that the problematic workspace/metastore is located in North Europe, consider region-specific configurations:
      • Ensure that the Azure storage account and container are accessible from the North Europe region.
      • Verify that the access connector settings are consistent with the region where your metastore is located.
  6. Compare with Working Metastore (West Europe):

    • Since you have a working metastore in West Europe, compare its configuration with the problematic one in North Europe.
    • Check for any differences in network settings, access control, and connectivity.
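A side-by-side comparison like this is easier if you dump each workspace's relevant settings into a dict and diff them mechanically. The keys in the usage comment are placeholders for whatever settings you export:

```python
def diff_configs(a: dict, b: dict) -> dict:
    """Return {key: (value_in_a, value_in_b)} for every key whose value
    differs; keys absent from one side appear with None on that side."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in sorted(keys) if a.get(k) != b.get(k)}

# Usage (placeholder keys):
# diff_configs(west_europe_settings, north_europe_settings)
```

Only the differing keys come back, so the output is exactly the list of settings to investigate.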
  7. Logs and Diagnostics:

    • Review the logs in Databricks and the Hive metastore to identify any additional error messages or warnings.
    • Look for specific details related to the SparkConnectGrpcException and the underlying cause.
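Since a gRPC error containing "closed" often means the channel to the cluster was dropped mid-call, it can help to distinguish transient channel errors from hard failures while you collect logs. The marker strings below are a heuristic assumption based on the messages quoted in the question, not an official classification:

```python
import time

# Heuristic markers for dropped-channel gRPC errors (assumption, not an API).
TRANSIENT_MARKERS = ("cannot invoke rpc", "closed", "unavailable")

def looks_transient(exc: Exception) -> bool:
    """Treat dropped-channel style gRPC errors as retriable."""
    msg = str(exc).lower()
    return any(marker in msg for marker in TRANSIENT_MARKERS)

def call_with_retry(fn, retries=3, base_delay=0.1):
    """Call fn(), retrying with exponential backoff on transient-looking errors;
    re-raise immediately on anything that looks permanent."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception as exc:
            if attempt == retries - 1 or not looks_transient(exc):
                raise
            time.sleep(base_delay * 2 ** attempt)
```

If a query succeeds under retry, the problem is intermittent connectivity rather than configuration; if it fails identically every time, focus on the permission and region checks above.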
  8. Restart Metastore Service:

    • Sometimes, restarting the metastore service can resolve transient issues.
    • In your Databricks Notebook, try running the following command to restart the metastore service:
      `%sh sudo service hive-metastore restart`
  9. Consult Databricks Community and Documentation:

    • Visit the Databricks Community to search for similar issues or ask for assistance.
    • Refer to the official Databricks documentation for detailed guidance on configuring Spark Connect and Hive metastore.

If you encounter any specific error messages or need further assistance, don’t hesitate to ask. Good luck! 🚀
