Hey Databricks forum,
I have been searching a lot but can't find a solution. My setup is as follows:
- a VNet connected to the Databricks workspace, containing:
  - a public subnet (delegated to Microsoft.Databricks/workspaces) with an NSG
  - a private subnet (delegated to Microsoft.Databricks/workspaces) with an NSG
  - a private-endpoints subnet
- Azure Data Lake Storage Gen2 with Hierarchical Namespace enabled:
  - a container named 'metastore'
  - public network access disabled
  - a private endpoint added:
    - target sub-resource: dfs
    - placed in the private-endpoints subnet
  - a private DNS zone privatelink.dfs.core.windows.net with an A record for [name of storage resource], plus a VNet link to the VNet
- a Unity Catalog access connector with a managed identity:
  - granted the Storage Blob Data Contributor RBAC role on the storage account
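For reference, the DNS pieces in the setup above can be sketched as plain string assembly. This is just a minimal illustration of which names are involved; "mystorageaccount" is a placeholder, not my real resource name:

```python
def privatelink_dns_names(account: str) -> dict:
    """Return the DNS names involved in a Private Link setup for
    ADLS Gen2 (dfs sub-resource).

    The private DNS zone is always named privatelink.dfs.core.windows.net;
    the A record inside it carries only the storage account name and
    points at the private endpoint's IP.
    """
    return {
        "public_fqdn": f"{account}.dfs.core.windows.net",
        "private_dns_zone": "privatelink.dfs.core.windows.net",
        "a_record_name": account,
        "privatelink_fqdn": f"{account}.privatelink.dfs.core.windows.net",
    }

# Placeholder account name for illustration only.
names = privatelink_dns_names("mystorageaccount")
print(names["privatelink_fqdn"])
```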
If I add a new external location with the following input:
- Storage credential (referencing the resource ID of the Unity Catalog access connector)
- URL: abfss://metastore@[name of storage resource].privatelink.dfs.core.windows.net
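As a sketch, the two URL variants I have been trying differ only in the host suffix. Pure string assembly, with a placeholder account name:

```python
def abfss_url(container: str, account: str, privatelink: bool = False) -> str:
    """Build an abfss:// URL for an ADLS Gen2 container.

    With privatelink=True the host uses the privatelink FQDN; otherwise
    the regular public FQDN (which, inside the VNet, the private DNS zone
    can still resolve to the private endpoint's IP).
    """
    suffix = "privatelink.dfs.core.windows.net" if privatelink else "dfs.core.windows.net"
    return f"abfss://{container}@{account}.{suffix}"

# Placeholder account name for illustration only.
print(abfss_url("metastore", "mystorageaccount", privatelink=True))
```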
I receive the message:
> Failed to access cloud storage: [AbfsRestOperationException] () exceptionTraceId=ff1075e9-00d9-44b6-a602-9d7c19fbae9b
When I enable public network access on the storage account and use 'staimzdatabricks.dfs.core.windows.net' as the host instead, it succeeds.
Searching for this exception suggests it could be network- or permission-related.
From a compute in the Databricks workspace, I run an nslookup for [name of storage resource].privatelink.dfs.core.windows.net:
> Server: 168.63.129.16
> Address: 168.63.129.16#53
>
> Non-authoritative answer:
> Name: [name of storage resource].privatelink.dfs.core.windows.net
> Address: 10.1.1.4
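The same lookup can be reproduced from a notebook cell with the standard library (the hostname below is a placeholder; on my cluster this returns the private endpoint IP, 10.1.1.4):

```python
import socket
from typing import Optional

def resolve(host: str) -> Optional[str]:
    """Resolve a hostname to an IPv4 address, or None if resolution fails."""
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

# Placeholder account name; inside the VNet this should resolve to the
# private endpoint's IP, outside it resolution fails and None is returned.
print(resolve("mystorageaccount.privatelink.dfs.core.windows.net"))
```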
Do you have any idea what I am missing here?