Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

Forum Posts

dataminion01
by New Contributor II
  • 3326 Views
  • 0 replies
  • 0 kudos

DLT constantly failing with time out errors

DLT was working but then started frequently failing with timeouts: com.databricks.pipelines.common.errors.deployment.DeploymentException: Failed to launch pipeline cluster xxxxxxxxxxxx: Self-bootstrap timed out during launch. Please try again later and con...

ironv
by New Contributor
  • 3373 Views
  • 0 replies
  • 0 kudos

Unable to query using multi-node clusters but works with serverless warehouse & single-node clusters

We have a schema with 10 tables, and currently all 4 users have ALL access. When I (or any other user) spin up a serverless SQL warehouse, I am able to query one of the tables (million rows) in SQL Editor and get a response within seconds. `select co...

satniks_o
by New Contributor III
  • 4401 Views
  • 5 replies
  • 2 kudos

Resolved! How to get logged in user name/email in the databricks streamlit app?

I have created a Databricks App using Streamlit and am able to deploy and use it successfully. I need to get the user name/email address of the logged-in user and display it in the Streamlit app. Is this possible? If not possible at the moment, any roadmap f...

Latest Reply
Carl_B
New Contributor II
  • 2 kudos

I have also tried to deploy a Streamlit app; however, I was not able to deploy it.

4 More Replies
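
For anyone landing on this thread: Databricks Apps pass the authenticated user's identity to the app as reverse-proxy HTTP headers, so a Streamlit app can usually read the email from the X-Forwarded-Email header. A minimal sketch, assuming Streamlit 1.37+ (which exposes request headers via st.context.headers) and the header names documented for Databricks Apps:

    import streamlit as st

    # Databricks Apps inject the logged-in user's identity as HTTP headers
    # on every request; the header names here are assumptions from the Apps docs.
    headers = st.context.headers
    user_email = headers.get("X-Forwarded-Email")              # e.g. "jane@example.com"
    user_name = headers.get("X-Forwarded-Preferred-Username")  # display name, if set

    st.write(f"Logged in as {user_name} ({user_email})")

Outside Databricks Apps these headers are absent, so guard for None when testing locally.
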
noorbasha534
by Valued Contributor II
  • 1321 Views
  • 2 replies
  • 0 kudos

Enforcing developers to use something like a single user cluster

Dear all, we have a challenge. Developers create/recreate tables/views in the PRD environment by running notebooks on all-purpose clusters, whereas the same notebooks already exist as jobs. Not sure why the developers feel comfortable in using all-purpose...

Latest Reply
noorbasha534
Valued Contributor II
  • 0 kudos

Hi Stefan, exactly, we have the same. The CI/CD process invokes jobs that run as a service principal. So far, so good. But please note that not all situations fall under this ideal case. There will be cases wherein I have to recreate 50 views ou...

1 More Replies
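
A guardrail that maps to this thread: attach a compute policy to the developer group that pins data_security_mode and caps interactive cluster size, so ad-hoc all-purpose compute is constrained even where jobs exist. A sketch using the Databricks SDK for Python; the policy keys follow the cluster-policy definition language, but the name and limits are illustrative:

    import json
    from databricks.sdk import WorkspaceClient  # assumes databricks-sdk is installed

    w = WorkspaceClient()  # auth from environment or ~/.databrickscfg

    # Each policy attribute gets a rule: "fixed" locks a value, "range" bounds it.
    definition = {
        "data_security_mode": {"type": "fixed", "value": "SINGLE_USER"},
        "autotermination_minutes": {"type": "range", "maxValue": 60},
        "num_workers": {"type": "range", "maxValue": 2},
    }

    policy = w.cluster_policies.create(
        name="dev-single-user-only",  # illustrative policy name
        definition=json.dumps(definition),
    )
    print(f"Created policy {policy.policy_id}")

A policy can limit what developers create, but (as the reply notes) it cannot by itself push a one-off backfill of 50 views into the job-based CI/CD path.
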
SmileyVille
by New Contributor III
  • 7552 Views
  • 3 replies
  • 0 kudos

Leverage Azure PIM with DataBricks with Contributor role privilege

We are trying to leverage Azure PIM. This works great for most things; however, we've run into a snag. We want to limit the Contributor role to a group and only at the resource group level, not the subscription. We wish to elevate via PIM. This will ...

Latest Reply
SmileyVille
New Contributor III
  • 0 kudos

Never did, so we scrapped PIM with Databricks for now.

2 More Replies
ambigus9
by Contributor
  • 3483 Views
  • 1 reply
  • 0 kudos

RStudio on Dedicated Cluster: Invalid Access Token

Hello! Currently I have RStudio installed on a dedicated cluster on Azure Databricks; here are the specs. I must emphasize the access mode: Manual and Dedicated to a Group. Here, we install RStudio using a notebook with the following...

Latest Reply
ambigus9
Contributor
  • 0 kudos

Hello! It's me again. I'm also getting the following error after testing a connection to Databricks using sparklyr: Error: ! java.lang.IllegalStateException: No Unity API token found in Unity Scope. Run `sparklyr::spark_last_error()` to see the full ...

KLin
by New Contributor III
  • 1828 Views
  • 7 replies
  • 1 kudos

Resolved! Unable to Pinpoint where network traffic originates from in GCP

Hi everyone, I have a question regarding networking. A bit of background first: for security reasons, the current allow-policy from GCP to our on-prem infrastructure is being replaced by a deny-policy for traffic originating from GCP. Therefore access...

Latest Reply
Alberto_Umana
Databricks Employee
  • 1 kudos

Hi @KLin, happy to help! The reason why traffic originates from the pods subnet for clusters/SQL warehouses without the x-databricks-nextgen-cluster tag (still using GKE) and from the node subnet for clusters with the GCE tag is due to the underly...

6 More Replies
jonas_braun
by New Contributor II
  • 3112 Views
  • 1 reply
  • 0 kudos

Asset Bundle: inject job start_time parameter

Hey! I'm deploying a job with Databricks Asset Bundles. When the PySpark task is started on a job cluster, I want the Python code to read the job start_time and select the right data sources based on that parameter. Ideally, I would read the parameter f...

Latest Reply
jonas_braun
New Contributor II
  • 0 kudos

The Databricks CLI version is v0.239.1.

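
For reference, one pattern that fits what's being asked (hedged; the parameter wiring is an assumption based on the jobs dynamic value references): map {{job.start_time.iso_datetime}} to a named job parameter in the bundle YAML, then read it inside the task:

    # In the bundle's job definition (YAML), sketch:
    #   parameters:
    #     - name: start_time
    #       default: "{{job.start_time.iso_datetime}}"
    from datetime import datetime

    # For a notebook task, job parameters arrive as widgets; for a
    # spark_python_task they arrive in sys.argv instead.
    start_time = datetime.fromisoformat(dbutils.widgets.get("start_time"))

    # Illustrative: pick the data source partition from the job start time.
    source_path = f"/mnt/raw/events/date={start_time:%Y-%m-%d}/"
    df = spark.read.parquet(source_path)

dbutils and spark here are the Databricks runtime globals, so the snippet only runs inside a Databricks task.
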
mnorland
by Valued Contributor
  • 4184 Views
  • 1 reply
  • 0 kudos

Resolved! Custom VPC Subranges for New GCP Databricks Deployment

What Pods and Services subranges would you recommend for a /22 subnet for a custom VPC for a new GCP Databricks deployment in the GCE era?  

Latest Reply
mnorland
Valued Contributor
  • 0 kudos

The secondary ranges are there to support legacy GKE clusters. While required in the UI, they can be empty in Terraform (per a source) for new deployments, as clusters are GCE now. (There is a green GCE next to the cluster name.) When observing the ...

Jeff4
by New Contributor
  • 3049 Views
  • 0 replies
  • 0 kudos

Unable to create workspace using API

Hi all, I'm trying to automate the deployment of Databricks into GCP. In order to streamline the process, I created a standalone project to hold the service accounts SA1 and SA2, with the second one then being manually populated into the Databricks ac...

hartenc
by New Contributor II
  • 1016 Views
  • 2 replies
  • 0 kudos

Workflow job runs are disabled

I'm not totally clear on the financial details, but from what I've been told: a few months ago our contract with Databricks expired and changed into a per-month subscription. In those months there was a problem with payments due to bills being sent to a wr...

Latest Reply
hartenc
New Contributor II
  • 0 kudos

We contacted them, but were told that we could only use community support unless we got a premium support subscription (not sure about the exact term; somebody else asked them). Our account ID is ddcb191f-aff5-4ba5-be46-41adf1705e03. If the workspace...

1 More Replies
Georgi
by New Contributor
  • 585 Views
  • 1 reply
  • 0 kudos

How to set a static IP to a cluster

Is there a way to set a static IP for a cluster on the Databricks instance? I'm trying to establish a connection with a service outside AWS, and it seems the only way to allow inbound connections is by adding the IP to a set of rules. Thanks! I couldn’t f...

Latest Reply
Takuya-Omi
Valued Contributor III
  • 0 kudos

Hi @Georgi, Databricks clusters on AWS don’t have a built-in way to assign a static IP address. Instead, the typical workaround is to route all outbound traffic from your clusters through a NAT Gateway (or similar solution) that has an Elastic IP ass...

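
To make that workaround concrete, here is a rough boto3 sketch of the NAT-plus-Elastic-IP plumbing (all resource IDs are placeholders; your VPC layout will differ):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # 1. Allocate an Elastic IP -- this becomes the stable egress address.
    eip = ec2.allocate_address(Domain="vpc")

    # 2. Create a NAT gateway in a *public* subnet using that EIP.
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0pub1234",  # placeholder public subnet
        AllocationId=eip["AllocationId"],
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # 3. Send the Databricks (private) subnets' internet-bound traffic via the NAT.
    ec2.create_route(
        RouteTableId="rtb-0priv5678",  # placeholder private route table
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )

    print(f"Whitelist this IP on the external service: {eip['PublicIp']}")

The external service then only needs to allow the one Elastic IP, regardless of which cluster nodes originate the traffic.
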
mzs
by Contributor
  • 2959 Views
  • 1 reply
  • 1 kudos

Resolved! Understanding Azure frontend private link endpoints

Hi, I've been reading up on Private Link (https://learn.microsoft.com/en-us/azure/databricks/security/network/classic/private-link) and have some questions: in the standard deployment, do the transit VNet (frontend private endpoint) and Databricks work...

Latest Reply
Zubisid
New Contributor III
  • 1 kudos

Below are the answers to your questions: 1) No, they don't have to be in the same subscription. You can have the transit VNet (with the front-end Private Endpoint) in one subscription and the Databricks workspace in another, as long as you set up the...

mzs
by Contributor
  • 4972 Views
  • 2 replies
  • 2 kudos

Using a proxy server to install packages from PyPI in Azure Databricks

Hi, I'm setting up a workspace in Azure and would like to put some restrictions in place on outbound Internet access to reduce the risk of data exfiltration from notebooks and jobs. I plan to use VNet Injection and SCC + back-end Private Link for comp...

Latest Reply
mzs
Contributor
  • 2 kudos

Thanks Isi, this is great info. I'll update once I've tried it.

1 More Replies
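
For readers of this thread, the usual mechanics (a sketch, not the only design) are to point clusters at the proxy via cluster environment variables, which pip honors when installing from PyPI. Using the Databricks SDK for Python, with an illustrative proxy endpoint:

    from databricks.sdk import WorkspaceClient  # assumes databricks-sdk is installed

    w = WorkspaceClient()

    proxy = "http://proxy.internal.example:3128"  # illustrative proxy URL
    cluster = w.clusters.create_and_wait(
        cluster_name="pypi-via-proxy",        # illustrative name
        spark_version="15.4.x-scala2.12",     # assumed LTS runtime
        node_type_id="Standard_DS3_v2",       # Azure node type; adjust as needed
        num_workers=1,
        spark_env_vars={                      # pip and most HTTP clients honor these
            "http_proxy": proxy,
            "https_proxy": proxy,
            "no_proxy": "127.0.0.1,localhost",
        },
    )
    print(cluster.cluster_id)

The same spark_env_vars block can live in a cluster policy so every new cluster inherits the proxy settings.
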
meshko
by New Contributor II
  • 1417 Views
  • 4 replies
  • 1 kudos

Help understanding RAM utilization graph

I am trying to understand the following graph Databricks is showing me and failing: what is that constant lightly shaded area close to 138GB? It is not explained in the "Usage type" legend. The job is running completely on the driver node, not utilizi...

Latest Reply
koji_kawamura
Databricks Employee
  • 1 kudos

Hi @meshko, the light-shaded area represents the total available RAM. The tooltip shows it when you hover over it with the mouse.

3 More Replies