- 779 Views
- 1 replies
- 1 kudos
Resolved! service principal control plane access management
Hi, our account admin has created a service principal to automate job execution. However, our security team is concerned that, by design, anyone with the service principal credentials might access the control plane, where the service principal is def...
The docs state: "Service principals give automated tools and scripts API-only access to Databricks resources, providing greater security than using user accounts." https://docs.databricks.com/gcp/en/admin/users-groups/service-principals#what-is-...
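As a rough illustration of that API-only access, here is a minimal sketch, assuming the `databricks-sdk` Python package and an OAuth M2M secret created for the service principal; the host, client ID, client secret, and job ID are placeholders.

```python
# Minimal sketch: authenticate as the service principal (OAuth M2M) and trigger a job.
# Assumes the databricks-sdk package; all values below are placeholders.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient(
    host="https://<workspace-url>",
    client_id="<service-principal-application-id>",    # placeholder
    client_secret="<service-principal-oauth-secret>",  # placeholder
)

# The service principal only needs run permission on this one job.
waiter = w.jobs.run_now(job_id=123)  # placeholder job ID
run = waiter.result()                # blocks until the run terminates
print(run.state.life_cycle_state, run.state.result_state)
```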
- 560 Views
- 1 replies
- 0 kudos
Issue with Verification Code Input on Login
Hello, I hope you're well. I'd like to report a bug encountered when entering the verification code to log into the platform. When I type the code without Caps Lock enabled, the input field displays the characters in uppercase, but the code isn't acc...
Hello @lucasbergamo! Thank you for bringing this to our attention. I'll share this with the relevant team for further investigation. In the meantime, as a workaround you can continue using Caps Lock while entering the verification code to log in.
- 608 Views
- 1 replies
- 0 kudos
Is it possible to restore a deleted catalog and schema
Is it possible to restore a deleted catalog and schema? If CASCADE is used, the catalog will be dropped even though schemas and tables are still present in it. Is it possible to restore the catalog, or to restrict the use of the CASCADE command? Thank you.
It is not possible to directly restore a deleted catalog or schema if they were dropped with the CASCADE option, especially in Databricks Unity Catalog. When a catalog or schema is dropped with CASCADE, all its dependent objects, such as schemas and ...
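Since a CASCADE drop cannot be undone, one hedged mitigation is to inventory the catalog before dropping it and avoid CASCADE unless it is genuinely empty. A minimal sketch, assuming a notebook with a Unity Catalog-enabled Spark session; `my_catalog` is a placeholder name.

```python
# Pre-drop inventory of a catalog, so CASCADE is never used blindly.
catalog = "my_catalog"  # placeholder

# First column of SHOW SCHEMAS is the schema name.
schemas = [row[0] for row in spark.sql(f"SHOW SCHEMAS IN {catalog}").collect()]
for schema in schemas:
    tables = spark.sql(f"SHOW TABLES IN {catalog}.{schema}").collect()
    print(f"{catalog}.{schema}: {len(tables)} table(s)")

# Only drop without CASCADE once the inventory above is empty:
# spark.sql(f"DROP CATALOG {catalog}")  # fails if any schema still exists
```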
- 744 Views
- 1 replies
- 0 kudos
Is it possible to expand/extend the subnet CIDR of an existing Azure Databricks workspace
Is it possible to expand/extend the subnet CIDR of an existing Azure Databricks workspace? Our workspace is currently maxed out. Can the subnet CIDR be expanded/extended without having to create a new workspace?
Yes, it is possible to expand or extend the subnet CIDR of an existing Azure Databricks workspace without creating a new one, but this capability is specifically applicable if the workspace is deployed with VNet injection. For workspaces that use V...
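For the mechanics of widening a subnet, a rough sketch with the azure-mgmt-network Python SDK; the resource names and the new prefix are placeholders, this assumes the wider range does not overlap other subnets, and it should be verified against the Azure Databricks VNet-injection requirements before applying.

```python
# Hedged sketch: widen the address prefix of an existing subnet in place.
# All names, IDs and the CIDR below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rg, vnet, subnet_name = "my-rg", "my-vnet", "databricks-private-subnet"

subnet = client.subnets.get(rg, vnet, subnet_name)
subnet.address_prefix = "10.139.0.0/20"  # wider CIDR; must not overlap other subnets

poller = client.subnets.begin_create_or_update(rg, vnet, subnet_name, subnet)
poller.result()  # wait for the update to complete
```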
- 1874 Views
- 3 replies
- 1 kudos
Azure Databricks Status
Dear all, I wanted to check if anyone has implemented a solution for capturing information from the Databricks status page in real time, 24x7, and loading it into a log or table: https://learn.microsoft.com/en-us/azure/databricks/resources/status What is the be...
It seems that the webhook is the way! There is nothing about system status in the Databricks REST API. There is nothing about system status in the System Tables schema.
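Besides webhooks, one hedged option for 24x7 capture is to poll the status page's JSON endpoint on a schedule. The sketch below assumes the page exposes a Statuspage-style /api/v2/status.json endpoint (verify the exact URL for your cloud) and uses a placeholder table name.

```python
# Hedged sketch: poll the status page JSON endpoint and append to a Delta table.
# The URL and table name are assumptions/placeholders.
import requests
from datetime import datetime, timezone

STATUS_URL = "https://status.azuredatabricks.net/api/v2/status.json"  # assumption, verify

resp = requests.get(STATUS_URL, timeout=10)
resp.raise_for_status()
payload = resp.json()

row = {
    "checked_at": datetime.now(timezone.utc).isoformat(),
    "indicator": payload.get("status", {}).get("indicator"),
    "description": payload.get("status", {}).get("description"),
}

# Append one observation per poll; schedule this as a small job.
spark.createDataFrame([row]).write.mode("append").saveAsTable("ops.monitoring.databricks_status")
```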
- 2388 Views
- 0 replies
- 0 kudos
Mismatch cuda/cudnn version on Databricks Runtime GPU ML version
I have a cluster on Databricks with configuration Databricks Runtime Version 16.4 LTS ML Beta (includes Apache Spark 3.5.2, GPU, Scala 2.12), and another cluster with configuration 16.0 ML (includes Apache Spark 3.5.2, GPU, Scala 2.12). According to...
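No replies yet; a quick way to confirm which CUDA and cuDNN versions each runtime actually exposes is to check from a notebook on both clusters. This sketch assumes PyTorch, which ships with the ML GPU runtimes.

```python
# Report the CUDA / cuDNN versions visible to the runtime on this cluster.
import subprocess
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA (torch build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())

# Driver-level view from the host:
print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)
```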
- 466 Views
- 1 replies
- 2 kudos
for_each_task with pool clusters
I am trying to run a `for_each_task` across different inputs of length `N` and `concurrency` `M` where N >> M. To mitigate cluster setup time I want to use pool clusters. Now, when I set everything up, I notice that instead of `M` concurrent clusters...
Hi @david_btmpl, when you set up a Databricks workflow using for_each_task with a cluster pool (instance_pool_id), Databricks will, by default, reuse the same cluster for all concurrent tasks in that job. So even if you’ve set a higher concurrency (li...
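For reference, a hedged sketch of what such a Jobs API 2.1 payload might look like, with the for_each_task iterations sharing a single job cluster drawn from an instance pool; the pool ID, notebook path, runtime version and input values are placeholders.

```python
# Hedged sketch of a Jobs API 2.1 settings payload: for_each_task over a list of
# inputs, all iterations running on one job cluster backed by an instance pool.
job_settings = {
    "name": "fan-out-over-inputs",
    "job_clusters": [
        {
            "job_cluster_key": "pool_cluster",
            "new_cluster": {
                "spark_version": "15.4.x-scala2.12",        # placeholder version
                "instance_pool_id": "<instance-pool-id>",   # placeholder
                "num_workers": 2,
            },
        }
    ],
    "tasks": [
        {
            "task_key": "loop",
            "for_each_task": {
                "inputs": "[1, 2, 3, 4, 5, 6, 7, 8]",
                "concurrency": 4,
                "task": {
                    "task_key": "loop_iteration",
                    "job_cluster_key": "pool_cluster",
                    "notebook_task": {
                        "notebook_path": "/Workspace/Users/me/process_input",  # placeholder
                        "base_parameters": {"value": "{{input}}"},
                    },
                },
            },
        }
    ],
}
```

Because every iteration references the same job_cluster_key, the concurrency setting controls how many iterations run in parallel on that one cluster, not how many clusters are started, which matches the behavior described above.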
- 2110 Views
- 0 replies
- 0 kudos
Query has been timed out due to inactivity while connecting from Tableau Prep
Hi, we are experiencing a "Query timed out" error while running Tableau flows with connections to Databricks. The query history for the Serverless SQL warehouse initially shows the query as finished in Databricks, but later the query status changes to "Query has been ...
- 2493 Views
- 2 replies
- 2 kudos
Query has been timed out due to inactivity.
Hi, we're experiencing an issue with SQL Serverless Warehouse when running queries through the dbx-sql-connector in Python. The error we get is: "Query has been timed out due to inactivity." This happens intermittently, even for queries that should com...
Getting the same error while trying to run Tableau flow on Databricks. Is there a solution for this issue?
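For the dbx-sql-connector case in the original question, one hedged workaround is to open the connection immediately before executing and retry once on failure, so an idle session is less likely to hit the inactivity timeout; the hostname, HTTP path and token below are placeholders.

```python
# Hedged sketch with databricks-sql-connector: connect just before running the
# statement and retry once on failure (all connection values are placeholders).
from databricks import sql

def run_query(statement: str, retries: int = 1):
    for attempt in range(retries + 1):
        try:
            with sql.connect(
                server_hostname="<workspace-host>",
                http_path="/sql/1.0/warehouses/<warehouse-id>",
                access_token="<pat-or-oauth-token>",
            ) as conn, conn.cursor() as cursor:
                cursor.execute(statement)
                return cursor.fetchall()
        except Exception:
            if attempt == retries:
                raise  # re-raise after the last attempt

rows = run_query("SELECT current_timestamp()")
```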
- 768 Views
- 1 replies
- 2 kudos
Service Principal Authentication / Terraform
Hello Databricks Community, I'm encountering an issue when trying to apply my Terraform configuration to create a Databricks MWS network on GCP. The terraform apply command fails with the following error: Error: cannot create mws networks: failed duri...
Databricks account-level APIs can only be called by account owners and account admins and can only be authenticated using Google-issued OIDC tokens. In Terraform 0.13 and later, data resources have the same dependency resolution behavior as defined fo...
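As a hedged illustration of the Google-issued OIDC requirement, the sketch below mints an ID token for the service account with google-auth and calls an account-level endpoint with it. The account ID and key file are placeholders, the service account must be a Databricks account admin, and some account APIs on GCP may require additional headers, so verify against the docs.

```python
# Hedged sketch: get a Google-issued OIDC (ID) token for a service account and
# call a Databricks account-level API with it (placeholders throughout).
import requests
from google.oauth2 import service_account
from google.auth.transport.requests import Request

ACCOUNTS_HOST = "https://accounts.gcp.databricks.com"
ACCOUNT_ID = "<databricks-account-id>"  # placeholder

creds = service_account.IDTokenCredentials.from_service_account_file(
    "sa-key.json",                  # placeholder key file
    target_audience=ACCOUNTS_HOST,
)
creds.refresh(Request())            # populates creds.token with the ID token

resp = requests.get(
    f"{ACCOUNTS_HOST}/api/2.0/accounts/{ACCOUNT_ID}/workspaces",
    headers={"Authorization": f"Bearer {creds.token}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```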
- 3687 Views
- 7 replies
- 2 kudos
Exact cost for job execution calculation
Hi everybody, I want to calculate the exact cost of a single job execution. All examples I can find on the internet use the tables system.billing.usage and system.billing.list_prices. It makes sense to calculate the sum of DBUs consumed and multi...
And what about the costs for the disks of the VMs of the cluster?
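For the DBU portion of that calculation, a hedged sketch of the usual join between the two system tables; the job ID is a placeholder, the column names should be verified against the current system-table docs, and, as the reply above points out, VM and disk charges are billed by the cloud provider and are not in these tables.

```python
# Hedged sketch: DBU cost per run for one job, from the billing system tables.
# Only covers the DBU component; cloud VM/disk costs are billed separately.
dbu_cost = spark.sql("""
    SELECT
      u.usage_metadata.job_id                   AS job_id,
      u.usage_metadata.job_run_id               AS job_run_id,
      SUM(u.usage_quantity * p.pricing.default) AS dbu_cost
    FROM system.billing.usage u
    JOIN system.billing.list_prices p
      ON u.sku_name = p.sku_name
     AND u.usage_start_time >= p.price_start_time
     AND (p.price_end_time IS NULL OR u.usage_start_time < p.price_end_time)
    WHERE u.usage_metadata.job_id = '123456789'   -- placeholder job ID
    GROUP BY 1, 2
""")
display(dbu_cost)  # notebook display
```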
- 932 Views
- 2 replies
- 2 kudos
Impossible to access Terraform created external location?!
Hi all, there seems to be an external location that nobody within the organization can actually see or manage, because it has been created with a Google service account in Terraform. Here is the problem: DESCRIBE EXTERNAL LOCATION `gcsbucketname...
I would agree that the metastore admin(s) should be able to see the external location. This issue can happen with terraform scripts if the script doesn't grant additional rights on the external location.
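If you can run a statement as the Terraform service account (the current owner) or as a metastore admin, a hedged sketch of the grants that make the location visible and manageable to others; the location and group names below are placeholders.

```python
# Hedged sketch: grant privileges on the external location to an admin group,
# and optionally transfer ownership away from the Terraform service account.
spark.sql("""
  GRANT ALL PRIVILEGES ON EXTERNAL LOCATION `my_external_location`
  TO `data-platform-admins`
""")

# Optional: hand ownership to an admin group instead of the service account.
spark.sql("""
  ALTER EXTERNAL LOCATION `my_external_location`
  OWNER TO `data-platform-admins`
""")
```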
- 679 Views
- 1 replies
- 0 kudos
Unexpected Behavior with Azure Databricks and Entra ID SCIM Integration
Hi everyone, I'm currently running some tests for a company that uses Entra ID as the backbone of its authentication system. Every employee with a corporate email address is mapped within the organization's Entra ID. Our company's Azure Databricks is c...
Hello @antonionuzzo, This behavior is occurring because Azure Databricks allows workspace administrators to invite users from their organization's Entra ID directory into the Databricks workspace. This capability functions independently of whether th...
- 1077 Views
- 3 replies
- 1 kudos
Monitor workspace admin activities
Hello everyone, I am conducting tests on Databricks AWS and have noticed that in an organization with multiple workspaces, each with different workspace admins, a workspace admin can invite a user who is not mapped within their workspace but is alread...
You do have some control over what workspace admins can do. Databricks allows account admins to restrict workspace admin permissions by enabling the RestrictWorkspaceAdmins setting. Have a look here: https://docs.databricks.com/aws/en/admin/workspace...
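A hedged sketch of toggling that setting over the settings REST API; the endpoint path, payload shape and status value are taken from memory of that API and should be checked against the docs linked above, and the host and token are placeholders.

```python
# Hedged sketch: tighten the RestrictWorkspaceAdmins workspace setting via REST.
# Verify the endpoint and payload against the Databricks settings API docs.
import requests

HOST = "https://<workspace-url>"          # placeholder
TOKEN = "<admin-pat-or-oauth-token>"      # placeholder

resp = requests.patch(
    f"{HOST}/api/2.0/settings/types/restrict_workspace_admins/names/default",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "setting": {
            # ALLOW_ALL is the permissive default; this value restricts admins.
            "restrict_workspace_admins": {"status": "RESTRICT_TOKENS_AND_JOB_RUN_AS"}
        },
        "field_mask": "restrict_workspace_admins.status",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```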
- 1305 Views
- 1 replies
- 2 kudos
Resolved! Predictive Optimization with multiple workspaces
We currently have an older instance of Azure Databricks that I migrated to Unity Catalog. Unfortunately I ran into some weird issues that don't seem fixable, so I created a new instance and pointed it to the same metastore. The setting at the metastor...
Hi @KIRKQUINBAR, if you enable Predictive Optimization at the metastore level in Unity Catalog, it automatically applies to all Unity Catalog managed tables within that metastore, no matter which workspace is accessing them. PO runs centrally, so the...
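Since the setting is inherited from the metastore, a hedged way to spot-check or override it on an individual catalog from either workspace; the catalog name `main` is a placeholder.

```python
# Hedged sketch: override or re-inherit Predictive Optimization on one catalog.
spark.sql("ALTER CATALOG main ENABLE PREDICTIVE OPTIMIZATION")
# ... or restore inheritance from the metastore-level setting:
# spark.sql("ALTER CATALOG main INHERIT PREDICTIVE OPTIMIZATION")

# The effective setting should be visible in the catalog details:
display(spark.sql("DESCRIBE CATALOG EXTENDED main"))
```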