Hi @Retired_mod,
thank you for your reply!
> Unity Catalog Configuration
We configured the metastore, workspace, and catalog to the best of our knowledge, following Databricks' documentation. Both the Databricks runtime and the AWS setup should be fully supported.
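For reference, these are the sanity checks we can run from a notebook (a sketch; `current_metastore()` is a built-in Unity Catalog SQL function):

```python
# Confirm the workspace is attached to the expected UC metastore.
spark.sql("SELECT current_metastore()").show(truncate=False)

# Confirm the catalog we created is visible from this cluster.
spark.sql("SHOW CATALOGS").show(truncate=False)
```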
> Metastore Health (Consider restarting or verifying the health of the metastore service)
AFAIK, the error message relates to the legacy Hive metastore at mdv2llxgl8lou0.ceptxxgorjrc.eu-central-1.rds.amazonaws.com, which is hosted and maintained centrally by Databricks. There is nothing we can do on our side here.
> Storage Credentials and Locations
Testing the external location that "should" hold the Unity Catalog data via the Data Catalog web UI shows: "All Permissions Confirmed. The associated Storage Credential grants permission to perform all necessary operations." We successfully use the same storage credentials for external volumes on the same S3 bucket (though in a different sub-folder).
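For what it's worth, the same path can also be listed from a notebook; a minimal sketch (the bucket and prefix below are placeholders, not our real location):

```python
# Placeholder S3 URI -- substitute the external location's real path.
path = "s3://my-bucket/unity-catalog-root/"

# On a UC-enabled cluster, access to this path is authorized through
# the external location and its storage credential, so a successful
# listing confirms the credential works outside the UI test as well.
display(dbutils.fs.ls(path))
```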
> Connection Troubleshooting
I'm not sure how we could set any credentials for the legacy Hive metastore. Shouldn't this be fully managed by Databricks (via its keystore)?
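My understanding is that connection credentials only exist for a *self-managed* external Hive metastore, where they would be passed as cluster Spark configs along these lines (a sketch of the documented external-metastore setup with placeholder values; we have none of this configured, which is why I assume ours is the Databricks-managed one):

```
spark.hadoop.javax.jdo.option.ConnectionURL jdbc:mysql://<metastore-host>:3306/<db>
spark.hadoop.javax.jdo.option.ConnectionDriverName org.mariadb.jdbc.Driver
spark.hadoop.javax.jdo.option.ConnectionUserName <user>
spark.hadoop.javax.jdo.option.ConnectionPassword <password>
```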
> Since you can reach the URL/port via the web terminal, consider checking the security group rules and firewall settings
With respect to security group rules and firewalls: is there any difference between making a network connection from Spark (JVM) versus from bash/Python when it is the very same VM/container? I can also successfully create a socket with Python or shell (%sh) from within a Databricks notebook.
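For completeness, this is essentially the check that succeeds from a notebook cell (port 3306 is my assumption, i.e. the default MySQL port for the RDS-hosted metastore):

```python
import socket

# Plain TCP connection to the metastore host from the driver container.
# If this succeeds, DNS resolution and network routing are fine at the
# OS level -- independent of anything Spark/JVM does on top.
with socket.create_connection(
    ("mdv2llxgl8lou0.ceptxxgorjrc.eu-central-1.rds.amazonaws.com", 3306),
    timeout=5,
) as sock:
    print("TCP connection established:", sock.getpeername())
```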
> Metadata Storage Location
I fully agree that "understanding its behavior is crucial", but apparently I'm missing something here. I simply created a catalog and set its storage root to an S3 directory for which the GUI shows we have full access.
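Concretely, the catalog creation was essentially the following (catalog name and S3 URI are placeholders for our real ones):

```python
# The MANAGED LOCATION must lie within a path covered by an existing
# external location -- the one the UI confirms we have full access to.
spark.sql("""
    CREATE CATALOG IF NOT EXISTS my_catalog
    MANAGED LOCATION 's3://my-bucket/unity-catalog-root/my_catalog/'
""")
```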
I have replied to each of your potential solutions; I hope this clears things up a bit.
Thanks!