- 1021 Views
- 2 replies
- 0 kudos
Workflow job runs are disabled
I'm not totally clear on the financial details, but from what I've been told: a few months ago our contract with Databricks expired and changed into a per-month subscription. In those months there was a problem with payments due to bills being sent to a wr...
- 0 kudos
We contacted them, but were told that we could only use community support unless we got a premium support subscription (not sure about the exact term; somebody else asked them). Our account ID is ddcb191f-aff5-4ba5-be46-41adf1705e03. If the workspace...
- 594 Views
- 1 replies
- 0 kudos
How to set a static IP to a cluster
Is there a way to set a static IP to a cluster on the Databricks instance? I'm trying to establish a connection with a service outside AWS, and it seems the only way to allow inbound connections is by adding the IP to a set of rules. Thanks! I couldn’t f...
- 0 kudos
Hi @Georgi, Databricks clusters on AWS don’t have a built-in way to assign a static IP address. Instead, the typical workaround is to route all outbound traffic from your clusters through a NAT Gateway (or similar solution) that has an Elastic IP ass...
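For anyone who lands here, a minimal sketch of that NAT Gateway approach using boto3; the region, subnet, and route-table IDs are placeholders, and the gateway must reach the available state before you route through it:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Allocate an Elastic IP; this becomes the stable outbound address.
eip = ec2.allocate_address(Domain="vpc")

# Create the NAT gateway in a *public* subnet of the workspace VPC.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",   # placeholder
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Point the private (cluster) subnets' default route at the NAT gateway
# so all outbound traffic leaves via the Elastic IP.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # placeholder
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```

The external service then only needs to allow that single Elastic IP in its inbound rules.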
- 2975 Views
- 1 replies
- 1 kudos
Resolved! Understanding Azure frontend private link endpoints
Hi, I've been reading up on private link (https://learn.microsoft.com/en-us/azure/databricks/security/network/classic/private-link) and have some questions: In the standard deployment, do the transit VNet (frontend private endpoint) and Databricks work...
- 1 kudos
Below are the answers to your questions:
1) No, they don’t have to be in the same subscription. You can have the transit VNet (with the front-end Private Endpoint) in one subscription and the Databricks workspace in another, as long as you set up the...
- 5035 Views
- 2 replies
- 2 kudos
Using a proxy server to install packages from PyPI in Azure Databricks
Hi, I'm setting up a workspace in Azure and would like to put some restrictions in place on outbound Internet access to reduce the risk of data exfiltration from notebooks and jobs. I plan to use VNet Injection and SCC + back-end private link for comp...
- 2 kudos
Thanks Isi, this is great info. I'll update once I've tried it.
- 1428 Views
- 4 replies
- 1 kudos
Help understanding RAM utilization graph
I am trying to understand the following graph Databricks is showing me, and failing: What is that constant lightly shaded area close to 138GB? It is not explained in the "Usage type" legend. The job is running completely on the driver node, not utilizi...
- 1 kudos
Hi @meshko, the light-shaded area represents the total available RAM. A tooltip shows it when you hover over the graph with the mouse.
- 4329 Views
- 1 replies
- 2 kudos
Create account group with terraform without account admin permissions
I’m trying to create an account-level group in Databricks using Terraform. When creating a group via the UI, it automatically becomes an account-level group that can be reused across workspaces. However, I’m struggling to achieve the same using Terra...
- 2 kudos
I am also interested in the solution for this! Workspace-level groups cannot be used to grant permissions on Unity Catalog resources so I also need to be able to create account-level groups in terraform while not being an account admin.
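Not a full answer to the permissions question, but for reference, account-level groups can also be created with the Databricks Python SDK's AccountClient. Like Terraform's account-level provider, this still requires account-level privileges; the account ID below is a placeholder:

```python
from databricks.sdk import AccountClient

# Account-level (not workspace-level) client; credentials come from the
# environment or a config profile. Placeholder account ID.
a = AccountClient(
    host="https://accounts.cloud.databricks.com",
    account_id="11111111-2222-3333-4444-555555555555",
)

# Creates a group that is reusable across workspaces, like the UI does.
group = a.groups.create(display_name="data-engineers")
print(group.id)
```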
- 1033 Views
- 2 replies
- 0 kudos
Resolved! How can I run a single task in a job from the REST API
How can I run a single task in a job that has many tasks? I can do it in the UI, but I can’t find a way to do it using the REST API. Does anyone know how to accomplish this?
- 0 kudos
It looks like this may be a possibility now? I haven't actually tried it, but I noticed a parameter named "only" has been added to the Databricks SDK for when running a job. Here is the commit that made the change: [Release] Release v0.38.0 (#826) · ...
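A minimal sketch with the Python SDK (v0.38.0 or later, per the release linked above); the job ID and task key are placeholders and I haven't verified this end to end:

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# `only` restricts the run to the listed task keys instead of
# triggering the job's full task graph.
waiter = w.jobs.run_now(job_id=123, only=["my_single_task"])
run = waiter.result()  # block until the run finishes
print(run.state)
```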
- 2890 Views
- 0 replies
- 0 kudos
Databricks Managed MLFlow with Different Unity Catalog for Multi-tenant Production Tracing
Does Databricks Managed MLflow only trace LLM traffic through the Serving Endpoint? Does it support manual tracing in my LLM application with the decorator @mlflow.trace? Also, how can Databricks Managed MLflow support multi-tenant cases where traces need ...
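On the manual-tracing part: @mlflow.trace does work for instrumenting your own application code, not just serving endpoints. A minimal sketch; tagging the active trace with a tenant ID via mlflow.update_current_trace is my assumption about how tenants could be separated, not a documented multi-tenancy feature:

```python
import mlflow

@mlflow.trace
def answer(prompt: str) -> str:
    # Stand-in for a real LLM call.
    return f"echo: {prompt}"

@mlflow.trace
def handle_request(tenant_id: str, prompt: str) -> str:
    # Assumption: tag the active trace so traces can later be
    # filtered per tenant in the UI or via trace search.
    mlflow.update_current_trace(tags={"tenant_id": tenant_id})
    return answer(prompt)

handle_request("tenant-a", "hello")
```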
- 1433 Views
- 1 replies
- 1 kudos
Assigning Dedicated (SINGLE_USER) ML Clusters to a Group in Databricks
I'm working with Databricks Runtime ML and have configured a cluster in Dedicated access mode (formerly SINGLE_USER). The documentation indicates that a compute resource with Dedicated access can be assigned to a group, allowing user permissions to a...
- 1 kudos
Hey @Mr_7199, yes, I’ve successfully configured a dedicated ML cluster assigned to a group. Here are three things to check:
1. Cluster Policy – Ensure the cluster policy does not impose restrictions. Using an unrestricted policy simplifies testing.
2. Perm...
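For reference, a sketch of creating such a cluster with the Python SDK; the runtime, node type, and group name are placeholders, and setting single_user_name to a group's display name is my understanding of how dedicated (group) access mode is expressed in the API:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import DataSecurityMode

w = WorkspaceClient()

cluster = w.clusters.create(
    cluster_name="ml-team-dedicated",
    spark_version="15.4.x-cpu-ml-scala2.12",  # placeholder ML runtime
    node_type_id="Standard_DS3_v2",           # placeholder node type
    num_workers=1,
    data_security_mode=DataSecurityMode.SINGLE_USER,
    single_user_name="ml-team",               # group display name (assumption)
).result()  # wait until the cluster is running
print(cluster.cluster_id)
```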
- 1570 Views
- 1 replies
- 1 kudos
Disable usage of serverless jobs & serverless all-purpose clusters
Dear all, I see some developers have started using serverless jobs and serverless all-purpose clusters. As a platform admin, I'd like to disable them, as we are not yet prepared as a team to move to serverless; we get huge discounts on compute from Microsoft ...
- 1 kudos
You can disable the serverless compute feature from your account console: https://docs.databricks.com/aws/en/admin/workspace-settings/serverless#enable-serverless-compute
I have heard that for some, if this option is not available, it means it is au...
- 1581 Views
- 2 replies
- 0 kudos
How to restore if a catalog is deleted
I am looking to identify potential pitfalls in the decentralized workspace framework, where the key business owners have full access to their respective workspaces and catalogs. In case of an accidental delete/drop of a schema or catalog from UC, what are th...
- 0 kudos
Hi @bhanu_dp, to recover from accidental deletes, you can:
1. Restore to a previous version using the time travel feature: https://docs.databricks.com/gcp/en/delta/history#restore-a-delta-table-to-an-earlier-state
2. Use the UNDROP command: https://docs.databrick...
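A sketch of both recovery paths as run from a notebook; the catalog/schema/table names and version number are placeholders:

```python
from pyspark.sql import SparkSession

# In a Databricks notebook `spark` is predefined; getOrCreate() just
# keeps the snippet self-contained elsewhere.
spark = SparkSession.builder.getOrCreate()

# 1) Time travel: roll a Delta table back to a known-good version.
spark.sql("RESTORE TABLE main.sales.orders TO VERSION AS OF 42")

# 2) UNDROP: recover a recently dropped managed table (Unity Catalog
#    keeps dropped managed tables for a limited retention window).
spark.sql("UNDROP TABLE main.sales.orders")
```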
- 2608 Views
- 1 replies
- 1 kudos
How to query all the users who have access to a Databricks workspace?
Hi there, I'm new to Databricks and we currently have a lot of users among different groups with access to a Databricks workspace. I would like to know how I could query the users, groups and entitlements of each group using SQL or the API. In case ...
- 1 kudos
To query all users who have access to a Databricks workspace, you can follow these steps:
1. Check Workspace Users via Admin Console: If you are a workspace admin, navigate to the Admin Console in the Databricks UI. Under the "Users" tab, you can view a...
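For the API route, a minimal sketch with the Databricks Python SDK (workspace-level SCIM); whether members and entitlements are populated may depend on your SDK version and the attributes returned:

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# All users with access to this workspace.
for u in w.users.list():
    print(u.user_name)

# Groups, with their members and entitlements.
for g in w.groups.list():
    members = [m.display for m in (g.members or [])]
    entitlements = [e.value for e in (g.entitlements or [])]
    print(g.display_name, members, entitlements)
```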
- 2807 Views
- 0 replies
- 0 kudos
Databricks on GCP admin console access
Hi, I'm trying to update the GCP permissions for Databricks as described here: https://docs.databricks.com/gcp/en/admin/cloud-configurations/gcp/gce-update
To be able to do that, I have to log in to the account console here: https://accounts.gcp.databr...
- 814 Views
- 1 replies
- 0 kudos
Spark Executor - Parallelism Question
While reading the book Spark: The Definitive Guide, I came across the below statement in Chapter 2 on partitions: "If you have many partitions but only one executor, Spark will still have a parallelism of only one because there is only one computation res...
- 0 kudos
Hey @SANJAYKJ, it is correct in the sense that a single executor is a limiting factor, but the actual parallelism within that executor depends on the number of cores assigned to it. If you want to leverage multiple partitions effectively, you either n...
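A quick local illustration of the point: with local[2] there are 2 task slots, so the 8 partitions are processed at most 2 at a time.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[2]").getOrCreate()
sc = spark.sparkContext

rdd = sc.parallelize(range(1000), numSlices=8)
print(rdd.getNumPartitions())  # 8 partitions -> 8 tasks
print(sc.defaultParallelism)   # 2 concurrent task slots
rdd.count()                    # 8 tasks run, but only 2 in flight at once
```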
- 2131 Views
- 4 replies
- 0 kudos
Resolved! Possible to programmatically adjust Databricks instance pool more intelligently?
We'd like to adopt a Databricks instance pool in order to reduce instance-acquisition times (a significant contributor to our test latency). Based on my understanding of the docs, the main levers we can control are: min instance count, max instance cou...
- 0 kudos
Hi Steve, if the goal is to pre-warm 100 instances in the Databricks Instance Pool, you could create a temporary job that will request instances from the pool. This ensures that Databricks provisions the required instances before the actual test run. T...
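An alternative to the temporary-job trick is to adjust the pool's idle floor around the test window. A sketch with the Python SDK; the pool ID is a placeholder, and note that edit requires re-sending the pool's name and node type:

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
POOL_ID = "0123-456789-pool123"  # placeholder

pool = w.instance_pools.get(instance_pool_id=POOL_ID)

# Pre-warm: raise the idle floor shortly before the test suite starts.
w.instance_pools.edit(
    instance_pool_id=POOL_ID,
    instance_pool_name=pool.instance_pool_name,
    node_type_id=pool.node_type_id,
    min_idle_instances=100,
)

# ...run the tests, then drop the floor so idle instances are released.
w.instance_pools.edit(
    instance_pool_id=POOL_ID,
    instance_pool_name=pool.instance_pool_name,
    node_type_id=pool.node_type_id,
    min_idle_instances=0,
)
```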