
Databricks is not mounting with the storage account, giving java.lang exception error 480

Sarathk
New Contributor

Hi Everyone,

I am currently facing an issue in our Test environment where Databricks is not able to mount the storage account. We use the same mount in our other environments (Dev, Preprod, and Prod), and it works fine there without any errors.

I have validated the settings below:

FYI, we are using a secret scope in Databricks backed by a Key Vault, from which we fetch the same SP details across all environments, and the secret is updated and in sync.

The required Storage Blob Data Contributor role is assigned to the SP on the storage account.

Network access is not restricted; it is enabled for all public networks and for the cluster.

We created a new SP and tried with it, with no luck.

Soft delete is enabled on this storage account, as it is on all our storage accounts in the other environments, where everything works fine.

Can someone please throw some light on this issue?

We are still getting the authentication java.lang exception error 480.

Thanks

1 ACCEPTED SOLUTION


mark_ott
Databricks Employee

This issue in your Test environment, where Databricks fails to mount an Azure Storage account with the error java.lang.Exception: 480, is most likely related to expired credentials or cached authentication tokens, even though the same configuration works in Dev, Preprod, and Prod.

Based on technical documentation and recent Databricks/Microsoft support discussions, here are the most probable causes and solutions:


Potential Causes

  1. Expired or Invalid Client Secret
    The Service Principal (SP) secret configured in Azure Key Vault might have expired or become invalid in the Test workspace context. Even if the same secret is updated across environments, cached mounts in Databricks can hold old tokens, causing authentication to fail.

  2. Stale or Conflicting Mount
    Previous mounts in the Test environment might still be holding old credentials. Databricks does not automatically refresh tokens for existing mounts, so an invalid cached credential could trigger the 480 error (a quick inspection sketch follows this list).

  3. Configuration or Permission Drift
    The SP or role assignment may appear identical but differ slightly due to propagation delays or hidden policy differences between subscriptions. Double-check the IAM scope—ensure the Storage Blob Data Contributor role is applied at the storage account level, not just at the container level.

  4. Soft Delete or Hierarchical Namespace Issues
    In rare cases, the soft delete feature can interfere with mounting operations if remnants of a previous mount container still exist and conflict with current tokens. This edge case has been observed when remounting after a cluster or workspace reset.

  5. Cluster Token Expiry
    If the Databricks cluster was running for an extended period, its internal access token used to reach Azure AD might have expired. Restarting the cluster refreshes this token and can often fix transient authentication exceptions.
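
To narrow down causes 1 and 2 before changing anything, here is a minimal read-only sketch (assuming the scope name <scope> and the key names sp-client-id / sp-client-secret used later in this answer) that lists the mounts defined in the Test workspace and confirms the SP credentials resolve from the secret scope.

python
# Read-only inspection; <scope> and the key names are placeholders for your setup.
# 1) List the mounts defined in this workspace to spot a stale Test-environment mount.
for m in dbutils.fs.mounts():
    if m.mountPoint.startswith("/mnt/"):
        print(m.mountPoint, "->", m.source)

# 2) Confirm the SP credentials resolve from the Key Vault-backed secret scope;
#    an exception here points at the scope/Key Vault rather than the mount itself.
for key in ["sp-client-id", "sp-client-secret"]:
    value = dbutils.secrets.get(scope="<scope>", key=key)
    print(key, "retrieved,", len(value), "characters")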


Recommended Resolution Steps

Follow these actions in order:

  1. Recreate Client Secret (if older than 90 days)

    • In Microsoft Entra ID (formerly Azure AD), generate a new client secret for the Service Principal.

    • Update the secret in your Key Vault and refresh it in your Databricks Secret Scope.

  2. Unmount and Remount the Path

    • Run:

      python
      # Unmount the existing (possibly stale) mount, then refresh the mount cache
      dbutils.fs.unmount("/mnt/<your-mount-path>")
      dbutils.fs.refreshMounts()
    • Then remount using updated credentials:

      python
      # Remount with the refreshed SP credentials pulled from the Databricks secret scope
      dbutils.fs.mount(
          source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
          mount_point="/mnt/<your-mount-path>",
          extra_configs={
              "fs.azure.account.auth.type.<storage-account>.dfs.core.windows.net": "OAuth",
              "fs.azure.account.oauth.provider.type.<storage-account>.dfs.core.windows.net":
                  "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
              "fs.azure.account.oauth2.client.id.<storage-account>.dfs.core.windows.net":
                  dbutils.secrets.get(scope="<scope>", key="sp-client-id"),
              "fs.azure.account.oauth2.client.secret.<storage-account>.dfs.core.windows.net":
                  dbutils.secrets.get(scope="<scope>", key="sp-client-secret"),
              "fs.azure.account.oauth2.client.endpoint.<storage-account>.dfs.core.windows.net":
                  "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
          }
      )
  3. Restart the Databricks Cluster
    This refreshes any stale authentication caches.

  4. Validate Connectivity

    • Test using Azure Storage Explorer with the same SP credentials, or read the container directly from a notebook (see the sketch after this list).

    • Verify the Databricks workspace subnet (if using managed VNet) can access your storage endpoint.

  5. Avoid Nested Mount Points
    Ensure you're not creating a mount inside another mount, which can throw authentication or path-related exceptions.
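
As a complement to step 4, the sketch below reads the container directly with session-scoped OAuth settings instead of a mount, which separates credential or network problems from mount-cache problems. The placeholders (<storage-account>, <container>, <tenant-id>, <scope>, and the key names) are the same assumptions as in the remount example in step 2.

python
# Connectivity check that bypasses mounts: set session-scoped OAuth configs for
# the storage account, then attempt a simple directory listing.
acct = "<storage-account>"
spark.conf.set(f"fs.azure.account.auth.type.{acct}.dfs.core.windows.net", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{acct}.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{acct}.dfs.core.windows.net",
               dbutils.secrets.get(scope="<scope>", key="sp-client-id"))
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{acct}.dfs.core.windows.net",
               dbutils.secrets.get(scope="<scope>", key="sp-client-secret"))
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{acct}.dfs.core.windows.net",
               "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

# If this listing succeeds, the SP, secret, and network path are fine and the issue
# is almost certainly the cached mount; if it fails, fix credentials/network first.
display(dbutils.fs.ls(f"abfss://<container>@{acct}.dfs.core.windows.net/"))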


If the issue persists after these steps, enabling Databricks debug logs (spark.databricks.service.server.logLevel=DEBUG) can help identify whether token acquisition or blob authentication is failing specifically.
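
A lighter alternative to raising cluster log levels is to catch the full exception chain from a failed mount attempt; the nested messages usually indicate whether the token request to login.microsoftonline.com or the storage authorization itself is failing. In the sketch below, configs is only a stand-in for the extra_configs dictionary from step 2.

python
# Print the complete error chain from a failed mount to see which stage fails.
import traceback

try:
    dbutils.fs.mount(
        source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
        mount_point="/mnt/<your-mount-path>",
        extra_configs=configs,  # stand-in for the extra_configs dict from step 2
    )
except Exception:
    traceback.print_exc()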


In most reported 480 cases from 2025, unmounting and remounting with a refreshed SP secret and restarting the cluster resolved the issue fully in test or QA environments.


2 REPLIES

NandiniN
Databricks Employee

Checking.

