Azure Databricks databricks-cli authentication with M2M using environment variables
Thursday
Which environment variables do I have to set to use the databricks-cli with M2M OAuth and Microsoft Entra ID managed service principals? I have already added the service principal to the workspace.
I found the following documentation, but I am still confused about why it does not work:
https://learn.microsoft.com/en-us/azure/databricks/dev-tools/auth/azure-sp
I set the env vars DATABRICKS_HOST, ARM_TENANT_ID, ARM_CLIENT_ID and ARM_CLIENT_SECRET, but when I try to run a command, I still get the error:
Error: InvalidConfigurationError: You haven't configured the CLI yet! Please configure by entering `/home/airflow/.local/bin/databricks configure`
I want to use env vars instead of config files, because this runs as part of an Airflow CI/CD pipeline.
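For reference, the env-var setup described above would look something like the following (all values are placeholders to be replaced with your own workspace URL and service principal details):

```shell
# Placeholder values -- substitute your real workspace URL and Entra ID
# service principal details before use
export DATABRICKS_HOST="https://adb-1234567890123456.7.azuredatabricks.net"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="11111111-1111-1111-1111-111111111111"
export ARM_CLIENT_SECRET="<client-secret>"

# A quick smoke test once the CLI picks these up, e.g.:
#   databricks current-user me
```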
Thursday
I would suggest creating a .databrickscfg profile; it is best practice and the easiest approach.
You will need to create it in your home directory: vim ~/.databrickscfg
Inside the file you can define multiple profiles like this:
https://learn.microsoft.com/en-us/azure/databricks/dev-tools/auth/azure-sp
# For account level access
[<some-unique-configuration-profile-name>]
host = <account-console-url>
account_id = <account-id>
azure_tenant_id = <azure-service-principal-tenant-id>
azure_client_id = <azure-service-principal-application-id>
azure_client_secret = <azure-service-principal-client-secret>
# For workspace level access
[<some-unique-configuration-profile-name>]
host = <workspace-url>
azure_tenant_id = <azure-service-principal-tenant-id>
azure_client_id = <azure-service-principal-application-id>
azure_client_secret = <azure-service-principal-client-secret>
After filling in all these values, you can verify your configuration with the command
'databricks auth profiles' — every profile in your .databrickscfg is listed along with whether its configuration is valid.
You can then run any Databricks CLI command with the global argument -p or --profile <profile-name>,
like databricks bundle deploy -p <profile-name>.
By default the CLI looks for .databrickscfg in your home directory (~), but you can keep it in any other directory by setting the DATABRICKS_CONFIG_FILE environment variable. More info on it here: https://docs.databricks.com/aws/en/dev-tools/auth/unified-auth#config-profiles
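For an Airflow setup like the one above, that could look like this (the path is purely illustrative):

```shell
# Illustrative path -- keep the profile file alongside the Airflow deployment
# instead of the default ~/.databrickscfg location
export DATABRICKS_CONFIG_FILE=/opt/airflow/config/.databrickscfg

# Subsequent CLI calls now read profiles from that file, e.g.:
#   databricks clusters list -p <profile-name>
```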

