Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

SQL Warehouse Serverless Endpoint Error

bozhu
Contributor

Our SQL Warehouse Serverless endpoint started failing this morning (2022-08-23 18:00:00 UTC):

org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Unable to build AWSGlueClient: com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.)

I am 100% certain nothing changed on our end, in either AWS or Databricks.

Since all our workflows and DLT pipelines are still running fine, and all our Databricks services/clusters use the same instance profile with the same glueCatalog setting, I believe Databricks' Serverless endpoints are broken. As a sanity check, I also fired up a Classic SQL Warehouse endpoint, and everything worked as expected.
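For readers unfamiliar with the error: the "region provider chain" is the ordered lookup the AWS SDK performs to discover a region, checking environment variables, then the shared config file, then the EC2 instance metadata service (IMDS). A minimal Python sketch of the idea (illustrative only; this is not the SDK's actual code, and the function and parameter names are made up for the example):

```python
import os

def resolve_region(env=None, profile_region=None, imds_region=None):
    """Illustrative region provider chain: environment variables first,
    then the shared config profile, then EC2 instance metadata (IMDS)."""
    env = os.environ if env is None else env
    # Step 1: environment variables
    for var in ("AWS_REGION", "AWS_DEFAULT_REGION"):
        if env.get(var):
            return env[var]
    # Step 2: shared config file (~/.aws/config)
    if profile_region:
        return profile_region
    # Step 3: instance metadata service; on EC2 this is an HTTP call
    if imds_region:
        return imds_region
    # All providers exhausted -> the error seen in the stack trace
    raise RuntimeError(
        "Unable to find a region via the region provider chain. "
        "Must provide an explicit region."
    )
```

If the final IMDS step fails, for example because the environment enforces IMDSv2 and the client does not send a session token, the chain is exhausted and an error like the `SdkClientException` above is raised, even though nothing in the user's own configuration changed.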

1 ACCEPTED SOLUTION


User16873043099
Contributor

This appears to be occurring due to IMDSv2 being enabled in the workspace.

https://docs.databricks.com/administration-guide/cloud-configurations/aws/imdsv2.html#migrate

To fix this, please try adding the Spark config below under SQL Admin Console > Data Access Configuration, where you have the Glue settings. Note that this will restart the endpoints in the workspace, so please do it during off hours.

spark.databricks.hive.metastore.glueCatalog.isolation.enabled false
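For reference, the Data Access Configuration field takes one `key value` pair per line, so the resulting block would look something like this (the `glueCatalog.enabled` line is assumed from the poster's description of their existing Glue settings):

```
spark.databricks.hive.metastore.glueCatalog.enabled true
spark.databricks.hive.metastore.glueCatalog.isolation.enabled false
```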


5 REPLIES


bozhu
Contributor

A few points to make clear here:

  • The workspace setting "Enforce AWS Instance Metadata Service V2 for all clusters" has always been disabled.
  • Again, as described in my original post, no changes were made to AWS or Databricks; Serverless SQL Warehouses just suddenly stopped working with this error.
  • The Spark key "spark.databricks.hive.metastore.glueCatalog.isolation.enabled" is not even recognised in the SQL Admin Console, as per the attached screenshot.


Vidula
Honored Contributor

Hi @Bo Zhu​ 

Hope all is well! Just wanted to check in to see whether you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.

We'd love to hear from you.

Thanks!

bozhu
Contributor

It resolved itself the next day. Apparently it was related to some bugs on Databricks' side, which will hopefully be fixed soon, along with improvements to the serverless service as a whole.

Kaniz_Fatma
Community Manager

Hi @Bo Zhu​, Indeed. Thanks for the amazing feedback!
