Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

Forum Posts

by oricaruso (New Contributor II)
  • 1017 Views
  • 1 reply
  • 0 kudos

GCS with Databricks Community

Hello, I would like to know if it is possible to connect my Databricks Community account to a Google Cloud Storage account via a notebook. I tried to connect it via the JSON key of my GCS service account, but the notebook always gives this error when ...

Latest Reply
SP_6721
Honored Contributor II
  • 0 kudos

Hi @oricaruso, To connect to GCS, you typically need to set the service account JSON key in the cluster's Spark config, not just in the notebook. However, since the Community Edition has several limitations, like the absence of secret scopes, restrict...

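The cluster-level Spark config the reply describes can be sketched as follows. The key names come from the Hadoop GCS connector; all values, and the small helper, are illustrative placeholders rather than a confirmed Community Edition recipe.

```python
# Hypothetical sketch: Spark config keys typically used to authenticate to GCS
# with a service-account JSON key. Values are placeholders copied from the
# fields of the downloaded key file.
gcs_spark_conf = {
    # Enable service-account auth for the GCS connector
    "spark.hadoop.google.cloud.auth.service.account.enable": "true",
    "spark.hadoop.fs.gs.project.id": "<project_id>",
    "spark.hadoop.fs.gs.auth.service.account.email": "<client_email>",
    "spark.hadoop.fs.gs.auth.service.account.private.key.id": "<private_key_id>",
    "spark.hadoop.fs.gs.auth.service.account.private.key": "<private_key>",
}

def as_cluster_conf_lines(conf):
    """Render the dict as the 'key value' lines pasted into a cluster's Spark config box."""
    return "\n".join(f"{k} {v}" for k, v in sorted(conf.items()))
```

On a full workspace the private key would normally come from a secret scope; on Community Edition, which lacks secret scopes, there is no safe place to hold it, which is the limitation the reply alludes to.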
by cmathieu (New Contributor III)
  • 2689 Views
  • 3 replies
  • 0 kudos

Multiple feature branches per user using Databricks Asset Bundles

I'm currently helping a team migrate to DABs from dbx, and they would like to be able to work on multiple features at the same time. What I was able to do is pass the current branch as a variable into the root_path and various names, so when the bundle...

Latest Reply
cmilligan262
New Contributor II
  • 0 kudos

@cmathieu can you provide an example inserting the branch name? I'm trying to do the same thing

2 More Replies
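The branch-as-a-variable approach described above can be sketched in Python: read the current git branch, then build a per-branch root_path from it. The bundle name and path layout here are assumptions for illustration, not the poster's actual configuration.

```python
import re
import subprocess

def current_branch():
    """Return the current git branch name (what you'd pass as a bundle variable,
    e.g. via `databricks bundle deploy --var="branch=$(git rev-parse --abbrev-ref HEAD)"`)."""
    return subprocess.check_output(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True
    ).strip()

def branch_root_path(user, branch):
    """Build a per-branch workspace root_path, sanitising characters that
    are awkward in workspace paths (slashes in branch names, etc.)."""
    safe = re.sub(r"[^A-Za-z0-9_-]", "_", branch)
    return f"/Workspace/Users/{user}/.bundle/my_bundle/dev_{safe}"
```

Because each branch gets its own root_path, two feature branches deploy side by side instead of overwriting each other's dev target.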
by mzs (Contributor)
  • 1321 Views
  • 1 reply
  • 0 kudos

Resolved! Removing compute policy permissions using Terraform

By default, the "users" and "admins" groups have CAN_USE permission on the Personal Compute policy. I'm using Terraform and would like to prevent regular users from using this policy to create additional compute clusters. I haven't found a way to do th...

Latest Reply
mzs
Contributor
  • 0 kudos

I learned the Personal Compute policy can be turned off at the account level: https://learn.microsoft.com/en-us/azure/databricks/admin/clusters/personal-compute#manage-policy

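For anyone who wants to restrict the policy rather than disable it account-wide, the permission change can be sketched as the payload below. The endpoint shape follows the generic Databricks Permissions API; the group name is hypothetical, and this only builds the request body.

```python
# Hypothetical sketch: the body for
#   PUT /api/2.0/permissions/cluster-policies/<policy_id>
# A PUT replaces the whole ACL, so listing only the groups that should keep
# CAN_USE effectively drops the default "users" grant (workspace admins
# retain access regardless).
def policy_acl_without_users(allowed_groups):
    return {
        "access_control_list": [
            {"group_name": g, "permission_level": "CAN_USE"}
            for g in allowed_groups
        ]
    }
```

The same ACL shape is what a Terraform `databricks_permissions` resource would manage declaratively.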
by harishgehlot (New Contributor III)
  • 1567 Views
  • 4 replies
  • 0 kudos

Resolved! Should we opt for multiple worker nodes in a DAB workflow template if our codebase is based on pandas?

Hi team, I am working in a Databricks Asset Bundle architecture and added my codebase repo to a workspace. My question is: do we need to opt for multiple worker nodes (num_worker_nodes > 1) or autoscale with a range of worker nodes if my codebase has mo...

Latest Reply
harishgehlot
New Contributor III
  • 0 kudos

Thanks @Shua42 . You really helped me a lot.

3 More Replies
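The gist of the usual answer here: plain pandas code executes only on the driver, so extra workers sit idle. That intent can be expressed as a single-node cluster spec; the runtime version and node type below are placeholders.

```python
# Hypothetical job-cluster fragment for a pandas-only workload. With
# num_workers = 0 and the single-node profile, nothing is paid for
# workers that pandas would never use.
single_node_cluster = {
    "spark_version": "15.4.x-scala2.12",   # placeholder runtime
    "node_type_id": "Standard_DS3_v2",     # placeholder node type
    "num_workers": 0,
    "spark_conf": {
        "spark.databricks.cluster.profile": "singleNode",
        "spark.master": "local[*]",
    },
    "custom_tags": {"ResourceClass": "SingleNode"},
}
```

Multiple workers or autoscaling only pay off once the code is ported to a distributed API (Spark DataFrames or the pandas API on Spark).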
by angiezz (New Contributor)
  • 1440 Views
  • 1 reply
  • 0 kudos

Resolved! Azure Databricks Outage - Incident ES-1463382 - Any Official RCA Available?

We experienced service disruptions on May 15th, 2025, related to Incident ID ES-1463382. Could anyone from Databricks share the official Root Cause Analysis (RCA) or point us to the correct contact or channel to obtain it? Thanks in advance!

Latest Reply
Advika
Community Manager
  • 0 kudos

Hello @angiezz! Please raise a support ticket with the Databricks Support Team. They will be able to provide you with the official documentation regarding this incident.

by jahanzebbehan (New Contributor)
  • 534 Views
  • 1 reply
  • 0 kudos

Databricks Cluster Downsizing time

Hello, It seems that the cluster downsizing at our end is occurring rapidly: sometimes the workers go from 5 to 3 in a mere 2 minutes! Is that normal? Can I do something to increase this downsizing time? - Jahanzeb

Latest Reply
eniwoke
Contributor II
  • 0 kudos

Hi @jahanzebbehan, unfortunately, I don't believe there is any way to adjust the downsizing time for a cluster, as this mostly happens automatically depending on workload volume. Here are some helpful links on autoscaling: https://community.databrick...

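A hedged sketch of the two levers usually discussed for this: raising min_workers (the dependable one), and a Spark setting reported to widen the scale-down decision window. Treat the setting name as an assumption to verify against the autoscaling docs before relying on it.

```python
# Hypothetical cluster fragment. Raising min_workers keeps a hard floor
# under the cluster, so it can never drop below 3 workers no matter how
# fast autoscaling reacts. The spark_conf key is an assumption about the
# classic-autoscaling scale-down window (in seconds).
autoscale_cluster = {
    "autoscale": {"min_workers": 3, "max_workers": 5},
    "spark_conf": {
        "spark.databricks.aggressiveWindowDownS": "600",
    },
}
```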
by thibault (Contributor III)
  • 2815 Views
  • 5 replies
  • 0 kudos

Resolved! Databricks Serverless Job : sudden random failure

Hi, I've been running a job on Azure Databricks serverless, which just does some batch data processing every 4 hours. This job, deployed with bundles, has been running fine for weeks, and all of a sudden, yesterday, it started failing with an error th...

(screenshot attached: thibault_0-1747205505731.png)
Latest Reply
Shua42
Databricks Employee
  • 0 kudos

Hey @thibault, Glad to hear it is working again. I don't see any specific mention of a bug internally that would be related to this, but it is likely that it was due to a change in the underlying runtime for serverless compute. This may be one of th...

4 More Replies
by oktarinet (New Contributor II)
  • 4676 Views
  • 2 replies
  • 1 kudos

Azure Databricks automatic user provisioning via Terraform

Hi community, Azure Databricks recently announced a new user management feature (now in public preview) called automatic-identity-management, which allows Azure Databricks to access Azure Entra ID directly and grant users and groups permissions and ...

Latest Reply
saurabh18cs
Honored Contributor II
  • 1 kudos

Hi, I think the automatic identity management feature provisions Azure Entra ID users and groups directly into Databricks. However, Terraform's databricks_group and databricks_group_member resources are designed for managing groups and memberships w...

1 More Replies
by Dharma25 (New Contributor III)
  • 2935 Views
  • 1 reply
  • 0 kudos

Workflow not picking up correct host value (while working with MLflow model registry URI)

Exception: mlflow.exceptions.MlflowException: An API request to https://canada.cloud.databricks.com/api/2.0/mlflow/model-versions/list-artifacts failed due to a timeout. The error message was: HTTPSConnectionPool(host='canada.cloud.databricks.com', p...

Latest Reply
Advika
Community Manager
  • 0 kudos

Hello @Dharma25! It looks like this post duplicates one you shared earlier. A response has already been provided in the Original thread. I recommend continuing the discussion there to keep the conversation focused and organized.

by Malthe (Contributor III)
  • 1837 Views
  • 4 replies
  • 2 kudos

Azure databricks workspace Power BI connector type

In Power BI, there's an "Azure Databricks workspace" connector which, unlike the "Azure Databricks" connector, allows you to connect using a service principal defined in Azure Entra ID (rather than within Databricks). While I can create this connector...

Latest Reply
sandeepmankikar
Contributor
  • 2 kudos

The "Azure Databricks Workspace" connector in Power BI allows authentication using a service principal from Azure Entra ID, providing more secure and scalable access management compared to the traditional personal token-based "Azure Databricks" conne...

3 More Replies
by amandaolens (New Contributor III)
  • 8139 Views
  • 4 replies
  • 2 kudos

Internal error. Attach your notebook to a different compute or restart the current compute. java.lan

Internal error. Attach your notebook to a different compute or restart the current compute. java.lang.RuntimeException: abort: DriverClient destroyed at com.databricks.backend.daemon.driver.DriverClient.$anonfun$poll$3(DriverClient.scala:577) at scala...

Latest Reply
LokeshManne
New Contributor III
  • 2 kudos

The error is caused by an overlap of connectors or instances. If you see an error as below, and multiple clusters with the same name, it is caused by running notebook_1 under a cluster attached to it and re-running a notebook_2 wit...

3 More Replies
by littlewat (New Contributor II)
  • 3069 Views
  • 3 replies
  • 3 kudos

Resolved! Why catalog API does not include the catalog ID in the response?

Hi! I'm using Terraform (TF) to manage the Databricks resources. I would like to rename the Unity catalog using TF, but I could not (similar issues have been reported for this: https://github.com/databricks/terraform-provider-databricks/issues?q=is%3A...

Latest Reply
Louis_Frolio
Databricks Employee
  • 3 kudos

I will pass your request along; however, there is nothing I can do to escalate the issue. I can only make the request.  Cheers, Lou.

2 More Replies
by giladba (New Contributor III)
  • 939 Views
  • 2 replies
  • 0 kudos

Network Connectivity Configurations - assign to workspace

Hi, Following these API calls, Databricks has not actually applied the NCC to the workspace, despite returning a success status. All values are correct (NCC ID, workspace ID). What could be the issue? # 1. Get list of NCCs to confirm ID and region - th...

Latest Reply
giladba
New Contributor III
  • 0 kudos

Thanks for the reply. The region is the same and the workspace is running.

1 More Replies
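For reference, a sketch of the account-level request that binds an NCC to a workspace; the IDs are placeholders, and the helper only builds the request rather than sending it, so compare it against what was actually called.

```python
# Hypothetical sketch of the account-console API call that assigns a
# Network Connectivity Configuration to a workspace. A workspace-level
# endpoint would not work here; the binding is an account-level update.
def ncc_assignment_request(account_id, workspace_id, ncc_id):
    return {
        "method": "PATCH",
        "path": f"/api/2.0/accounts/{account_id}/workspaces/{workspace_id}",
        "json": {"network_connectivity_config_id": ncc_id},
    }
```

If a call like this returns success but the NCC still does not appear on the workspace, comparing the exact path and body against this shape (and re-reading the workspace via GET) is a reasonable first check.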
by rocky5 (New Contributor III)
  • 4977 Views
  • 6 replies
  • 0 kudos

system.billing.usage table - cannot match job_id from Databricks API/UI

Hello, I have multiple continuous jobs that have been running for many days (Kafka stream); however, querying the system.billing.usage table by job_id from the UI or the Databricks Jobs API does not return any results for those jobs. 1. What is the reason behind that? 2. If I ...

Latest Reply
namanphy
New Contributor II
  • 0 kudos

What is the update on this? I am also unable to see my continuous jobs' usage in the system.billing.usage table, although the run information and task information are available in system.lakeflow.job_run_timeline and system.lakeflow.job_task_run_timeline. Pl...

5 More Replies
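A hedged example of how the billing table is usually queried for job costs: the job ID lives in the nested usage_metadata struct, not in a top-level job_id column, which may be one reason a naive search finds nothing. The job ID value is a placeholder.

```python
# Example SQL (as a Python string) against the Unity Catalog system tables.
# Filtering on the nested usage_metadata.job_id field surfaces per-job DBU
# usage; a top-level job_id column does not exist on this table.
usage_by_job_sql = """
SELECT u.usage_date,
       u.usage_metadata.job_id AS job_id,
       SUM(u.usage_quantity)   AS dbus
FROM system.billing.usage AS u
WHERE u.usage_metadata.job_id = '<your_job_id>'
GROUP BY u.usage_date, u.usage_metadata.job_id
ORDER BY u.usage_date
"""
```

For continuous jobs specifically, cross-checking against system.lakeflow.job_run_timeline (mentioned in the reply above) helps confirm whether the gap is in the billing table or in the job ID being used.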
by jash281098 (New Contributor II)
  • 652 Views
  • 0 replies
  • 0 kudos

Issues when configuring keystore Spark config for PySpark to MongoDB Atlas X.509 connectivity

Steps followed - Step 1: Add an init script that copies the keystore file to the tmp location. Step 2: Add Spark config in cluster advanced options - spark.driver.extraJavaOptions -Djavax.net.ssl.keyStore=/tmp/keystore.jks -Djavax.net.ssl.keyStorePa...

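Since this post has no replies yet, here is a hedged sketch of the two steps it describes. The keystore source path and password are placeholders, and one common gap with executor-side TLS is setting the JVM options only on the driver, so the sketch sets them on both.

```python
# Hypothetical sketch of the two steps from the post: an init script that
# stages the keystore on every node, and matching JVM options for the
# driver AND the executors (executors open their own TLS connections).
init_script = """#!/bin/bash
cp /Volumes/main/certs/keystore.jks /tmp/keystore.jks   # assumed source path
chmod 644 /tmp/keystore.jks
"""

jvm_opts = (
    "-Djavax.net.ssl.keyStore=/tmp/keystore.jks "
    "-Djavax.net.ssl.keyStorePassword=<password>"
)
keystore_spark_conf = {
    "spark.driver.extraJavaOptions": jvm_opts,
    "spark.executor.extraJavaOptions": jvm_opts,
}
```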