- 75 Views
- 1 reply
- 0 kudos
Resolved! Removing compute policy permissions using Terraform
By default, the "users" and "admins" groups have CAN_USE permission on the Personal Compute policy. I'm using Terraform and would like to prevent regular users from using this policy to create additional compute clusters. I haven't found a way to do th...
I learned the Personal Compute policy can be turned off at the account level: https://learn.microsoft.com/en-us/azure/databricks/admin/clusters/personal-compute#manage-policy
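The account-level toggle removes the policy everywhere. If you only want to revoke the default CAN_USE grant in one workspace, the Permissions API is another route: a PUT on a cluster policy's permissions replaces its direct access-control list, so simply omitting the "users" group revokes its access. A minimal sketch of building that request body (the "compute-creators" group name is hypothetical):

```python
import json

def policy_acl_payload(groups_with_can_use):
    """Build the body for PUT /api/2.0/permissions/cluster-policies/{policy_id}.

    PUT replaces the policy's direct access-control list, so any group left
    out of the list (e.g. "users") loses its CAN_USE grant.
    """
    return json.dumps({
        "access_control_list": [
            {"group_name": g, "permission_level": "CAN_USE"}
            for g in groups_with_can_use
        ]
    })

# Grant CAN_USE only to a hypothetical restricted group:
body = policy_acl_payload(["compute-creators"])
```

Terraform's `databricks_permissions` resource manages the same ACL declaratively, which may fit the original Terraform setup better than raw API calls.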
- 415 Views
- 4 replies
- 0 kudos
Resolved! Should we opt for multiple worker nodes in a DAB workflow template if our codebase is based on pandas?
Hi team, I am working in a Databricks Asset Bundle architecture and have added my codebase repo in a workspace. My question: do we need to opt for multiple worker nodes (num_worker_nodes > 1) or autoscale with a range of worker nodes if my codebase has mo...
Thanks @Shua42. You really helped me a lot.
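Since plain pandas executes entirely on the driver, extra workers mostly sit idle. For that kind of codebase a single-node job cluster is usually enough; a sketch of the relevant DAB cluster block (node type and runtime version are assumptions, adjust to your cloud):

```yaml
job_clusters:
  - job_cluster_key: pandas_single_node
    new_cluster:
      spark_version: 15.4.x-scala2.12
      node_type_id: Standard_DS3_v2     # Azure example; pick your own size
      num_workers: 0                    # plain pandas never touches workers
      spark_conf:
        spark.databricks.cluster.profile: singleNode
        spark.master: local[*]
      custom_tags:
        ResourceClass: SingleNode
```

Autoscaling or num_workers > 1 only pays off once the code is rewritten against Spark (e.g. pyspark.pandas).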
- 60 Views
- 0 replies
- 0 kudos
Static IP for existing workspace
Is there a way to have static IP addresses for Azure Databricks without creating a new workspace? We have worked a lot in 2 workspaces (dev and main), but now we need static IP addresses for both to work with some APIs. Do we really have to recreate the...
- 703 Views
- 6 replies
- 1 kudos
How to install (MSSQL) drivers on job compute?
Hello, I'm having this issue with job computes. The snippet of the code is as follows:

    if self.conf["persist_to_sql"]:
        # persist to sql
        df_parsed.write.format(
            "com.microsoft.sqlserver.jdbc.spark"
    ...
For a job compute, you would have to go the init-script route. Can you please highlight the cause of the failure of the library installation via the init script?
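As an alternative to the init-script route, the Spark connector can be attached as a Maven library directly on the job task, which job clusters resolve at start-up. A sketch of the task definition (entry-point path and connector version are assumptions):

```yaml
tasks:
  - task_key: persist_to_sql
    job_cluster_key: main
    spark_python_task:
      python_file: ../src/persist.py   # hypothetical entry point
    libraries:
      - maven:
          coordinates: com.microsoft.azure:spark-mssql-connector_2.12:1.2.0
```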
- 1233 Views
- 1 reply
- 0 kudos
How to upload a file to a Unity Catalog volume using Databricks Asset Bundles
Hi, I am working on a CI/CD blueprint for developers, using which developers can create their bundle for jobs/workflows and then create a volume to which they will upload a wheel file or a jar file, which will be used as a dependency in their noteboo...
Hi Venugopal, I have a similar requirement. Did you find a solution to handle this?
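One way to do this with bundles is to build the wheel as a bundle artifact and point the workspace artifact_path at a Unity Catalog volume, so deployment uploads it there automatically. A sketch (catalog/schema/volume names are hypothetical):

```yaml
artifacts:
  my_wheel:
    type: whl
    path: .
    build: python -m build --wheel

workspace:
  # Deployed artifacts land in this volume and can be referenced as task libraries
  artifact_path: /Volumes/main/shared/release_artifacts
```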
- 694 Views
- 2 replies
- 1 kudos
Terraform - Azure Databricks workspace without NAT gateway
Hi all, I have experienced an increase in costs, even when not using Databricks compute. It is due to the NAT gateway that is (suddenly) automatically deployed. When creating Azure Databricks workspaces using Terraform, a NAT gateway is created. When ...
Hi, unfortunately you need to explicitly define each resource of the non-NAT-gateway pattern if you want to replicate the setup as it is deployed via the Azure portal. For me, the following TF declaration did the job:

    provider "azurerm" {
      features {}...
- 157 Views
- 1 reply
- 0 kudos
Azure Databricks Outage - Incident ES-1463382 - Any Official RCA Available?
We experienced service disruptions on May 15th, 2025, related to Incident ID ES-1463382. Could anyone from Databricks share the official Root Cause Analysis (RCA) or point us to the correct contact or channel to obtain it? Thanks in advance!
Hello @angiezz! Please raise a support ticket with the Databricks Support Team. They will be able to provide you with the official documentation regarding this incident.
- 97 Views
- 1 reply
- 0 kudos
Databricks Cluster Downsizing time
Hello, it seems that cluster downsizing at our end is occurring rapidly: sometimes the workers go from 5 to 3 in a mere 2 minutes! Is that normal? Can I do something to increase this downsizing time? - Jahanzeb
Hi @jahanzebbehan, unfortunately I don't believe there is any way to adjust the downsizing time for a cluster, as this mostly happens automatically depending on workload volume. Here are some helpful links on autoscaling: https://community.databrick...
- 503 Views
- 5 replies
- 0 kudos
Databricks Serverless Job: sudden random failure
Hi, I've been running a job on Azure Databricks serverless, which just does some batch data processing every 4 hours. This job, deployed with bundles, has been running fine for weeks, and all of a sudden, yesterday, it started failing with an error th...
Hey @thibault, glad to hear it is working again. I don't see any specific mention of a bug internally that would be related to this, but it is likely that it was due to a change in the underlying runtime for serverless compute. This may be one of th...
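If the underlying serverless runtime did change, pinning the job's environment version can reduce that kind of surprise. A sketch of a Jobs environment spec, referenced from tasks via environment_key (the client value and dependency are assumptions):

```yaml
environments:
  - environment_key: pinned
    spec:
      client: "1"                  # serverless environment version to pin
      dependencies:
        - my_package==1.0.0        # hypothetical pinned dependency
```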
- 305 Views
- 2 replies
- 0 kudos
Azure Databricks automatic user provisioning via Terraform
Hi community, Azure Databricks recently announced a new user management feature (now in public preview) called automatic identity management, which allows Azure Databricks to access Azure Entra ID directly and grant users and groups permissions and ...
Hi, I think the automatic identity management feature provisions Azure Entra ID users and groups directly into Databricks. However, Terraform's databricks_group and databricks_group_member resources are designed for managing groups and memberships w...
- 521 Views
- 1 reply
- 0 kudos
Workflow not picking up correct host value (while working with MLflow model registry URI)
Exception: mlflow.exceptions.MlflowException: An API request to https://canada.cloud.databricks.com/api/2.0/mlflow/model-versions/list-artifacts failed due to a timeout. The error message was: HTTPSConnectionPool(host='canada.cloud.databricks.com', p...
Hello @Dharma25! It looks like this post duplicates one you shared earlier. A response has already been provided in the Original thread. I recommend continuing the discussion there to keep the conversation focused and organized.
- 355 Views
- 4 replies
- 2 kudos
Azure Databricks workspace Power BI connector type
In Power BI, there's an "Azure Databricks workspace" connector which, unlike the "Azure Databricks" connector, allows you to connect using a service principal defined in Azure Entra ID (rather than within Databricks).While I can create this connector...
The "Azure Databricks Workspace" connector in Power BI allows authentication using a service principal from Azure Entra ID, providing more secure and scalable access management compared to the traditional personal token-based "Azure Databricks" conne...
- 202 Views
- 0 replies
- 0 kudos
[Azure Databricks]: Use managed identity to access MLflow models and artifacts
Hello! I am new to Azure Databricks and have a question: in my current setup, I am running some containerized Python code within an Azure Functions app. In this code, I need to download some models and artifacts stored via MLflow in our Azure Databri...
- 230 Views
- 1 reply
- 0 kudos
New default notebook format (IPYNB) causes unintended changes on release
Dear Databricks, We have noticed the following issue since the new default notebook format has been set to IPYNB. When we release our code from (for example) DEV to TST using a release pipeline built in Azure DevOps, we see unintended changes popping ...
Seems like something went wrong with attaching the screenshot. So here we go.
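The unintended diffs usually come from outputs, execution counts, and volatile metadata that the IPYNB format embeds in the JSON alongside the code. A release pipeline can normalize notebooks before comparing; a stdlib-only sketch (mirroring what tools like nbstripout do):

```python
import json

def strip_ipynb_noise(nb_text: str) -> str:
    """Drop outputs and execution counts from a .ipynb document so only
    real code/markdown changes show up in a release diff."""
    nb = json.loads(nb_text)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return json.dumps(nb, indent=1, sort_keys=True) + "\n"

# A minimal notebook whose only "changes" are run artifacts:
raw = json.dumps({
    "nbformat": 4, "nbformat_minor": 5, "metadata": {},
    "cells": [{"cell_type": "code", "source": ["print(1)"],
               "execution_count": 7,
               "outputs": [{"output_type": "stream", "text": "1\n"}],
               "metadata": {}}],
})
clean = strip_ipynb_noise(raw)
```

Running the same normalization on both the DEV and TST sides makes the pipeline diff reflect source changes only.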
- 6698 Views
- 4 replies
- 2 kudos
Internal error. Attach your notebook to a different compute or restart the current compute. java.lan
Internal error. Attach your notebook to a different compute or restart the current compute.

    java.lang.RuntimeException: abort: DriverClient destroyed
        at com.databricks.backend.daemon.driver.DriverClient.$anonfun$poll$3(DriverClient.scala:577)
        at scala...
The error is caused by an overlap of connectors or instances. If you see an error as below, you can see multiple clusters with the same name, which is caused by running notebook_1 under a cluster attached to it and re-running notebook_2 wit...