Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

Databricks CLI token creation fails with “cannot configure default credentials”

gannicus
Visitor
Hello, I have been generating a Databricks personal access token in my YAML-based CI pipeline with a bash script. The pipeline installs the Databricks CLI and then creates the token using Service Principal (Azure AD application) credentials.

Previously working approach

#!/bin/bash
set -euo pipefail  # fail fast if token creation or parsing fails

dbx_host="${1}"
dbx_client_id="${2}"
dbx_client_secret="${3}"

# Set the environment variables for Databricks authentication
export DATABRICKS_HOST="$dbx_host"
export DATABRICKS_CLIENT_ID="$dbx_client_id"
export DATABRICKS_CLIENT_SECRET="$dbx_client_secret"

echo "Creating a new Databricks token"

response=$(databricks tokens create \
  --lifetime-seconds 31536000 \
  --comment "Token for SPN for EDH Data Access. Validity 1 year.")

echo "Token Created Successfully"

# Quote "$response" so the JSON reaches jq intact
token=$(echo "$response" | jq -r '.token_value')
token_id=$(echo "$response" | jq -r '.token_info.token_id')
expiry_time=$(echo "$response" | jq -r '.token_info.expiry_time')
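For reference, the jq calls assume the response JSON is shaped roughly like this (field names taken from the script itself; the values below are fabricated for illustration):

```shell
# Fabricated sample of the `databricks tokens create` response shape
# that the jq extraction above assumes (real values will differ)
response='{"token_value":"dapiXXXX","token_info":{"token_id":"123","expiry_time":1767225600000}}'

token=$(echo "$response" | jq -r '.token_value')
token_id=$(echo "$response" | jq -r '.token_info.token_id')
expiry_time=$(echo "$response" | jq -r '.token_info.expiry_time')

echo "$token_id"
```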

This used to work fine for generating tokens.

Issue: Recently, the same pipeline started failing with the following error:

Error: default auth: cannot configure default credentials, please check https://docs.databricks.com/en/dev-tools/auth.html#databricks-client-unified-authentication to configure credentials for your preferred authentication method.

Config: host=https://***, account_id=***, workspace_id=***, profile=DEFAULT, azure_tenant_id=***, client_id=***, client_secret=***

Env: DATABRICKS_HOST, DATABRICKS_CLIENT_ID, DATABRICKS_CLIENT_SECRET

The documentation link provided in the error message does not really help in identifying what exactly needs to be changed or how to fix this specific CI/CD use case.

Has there been a recent change in Databricks CLI authentication (especially unified authentication) that breaks Service Principal authentication using DATABRICKS_CLIENT_ID and DATABRICKS_CLIENT_SECRET environment variables?

Any guidance or migration steps would be appreciated.

 
1 REPLY

emma_s
Databricks Employee

Hi,

I'm pretty sure what you're hitting is stricter auth detection in the newer CLI/SDK. Your error shows azure_tenant_id, client_id, and client_secret all populated, so it's seeing more than one credential type and refusing to guess between them.

The fix is to set DATABRICKS_AUTH_TYPE explicitly so the CLI knows which method to use. It's also worth tracing where azure_tenant_id is coming from: your script doesn't set it, so it's leaking in from .databrickscfg, the runner environment, or an earlier step. That will tell you whether you actually want the Azure AD path or the Databricks-managed OAuth M2M path.
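As a sketch, assuming your client_id/client_secret pair are Databricks-managed OAuth credentials for the service principal (swap the value if they turn out to be Azure AD app credentials), pinning the auth type is one line before your existing exports:

```shell
# Pin the auth method so the CLI stops auto-detecting between
# credential types. "oauth-m2m" consumes DATABRICKS_CLIENT_ID /
# DATABRICKS_CLIENT_SECRET; use "azure-client-secret" instead if
# these are Azure AD application credentials (ARM_* variables).
export DATABRICKS_AUTH_TYPE=oauth-m2m
```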

The auth type values and required env vars for each are documented here:

Databricks CLI auth methods — https://learn.microsoft.com/en-us/azure/databricks/dev-tools/cli/authentication
OAuth M2M for service principals — https://learn.microsoft.com/en-us/azure/databricks/dev-tools/auth/oauth-m2m

One thing worth raising: do you actually need to mint a one-year PAT in the pipeline? OAuth M2M tokens are issued automatically and refreshed hourly, so most CI patterns can authenticate the SP directly on each run and skip the token-creation step entirely. That avoids keeping a long-lived token in pipeline state and sidesteps this whole class of CLI-version fragility. It's fair to keep the PAT if something downstream can only consume one.
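A minimal sketch of that pattern, assuming the same three pipeline inputs as your script and a service principal with workspace access: each run authenticates directly and no PAT is ever minted.

```shell
# Authenticate this run as the service principal via OAuth M2M;
# the CLI fetches and refreshes short-lived tokens automatically.
export DATABRICKS_AUTH_TYPE=oauth-m2m
export DATABRICKS_HOST="$dbx_host"
export DATABRICKS_CLIENT_ID="$dbx_client_id"
export DATABRICKS_CLIENT_SECRET="$dbx_client_secret"

# Every CLI call from here on authenticates as the SP, e.g.:
databricks current-user me
```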

I hope that helps.


Thanks,

Emma