by BryanC • New Contributor II
- 596 Views
- 5 replies
- 0 kudos
How can I find the saved/pre-defined queries in Databricks system tables? system.query.history seems NOT to have that info, like query ID or query name.
Latest Reply
Hi Bryan, Databricks system tables do not store saved queries. The query history table captures query execution details, including:
- Statement ID
- Execution status
- User who ran the query
- Statement text (if not encrypted)
- Statement type
- Execution duration
- Res...
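A minimal sketch of pulling recent execution details out of system.query.history. The column names (statement_id, execution_status, executed_by, statement_text) follow the list above — verify them against your workspace's schema before relying on this; the helper and the email are illustrative, not an official API:

```python
def recent_queries_sql(user_email: str, limit: int = 50) -> str:
    """Build a SQL statement over system.query.history.

    Column names follow the reply above; confirm them in your
    workspace (DESCRIBE TABLE system.query.history) before use.
    """
    return (
        "SELECT statement_id, execution_status, executed_by, statement_text\n"
        "FROM system.query.history\n"
        f"WHERE executed_by = '{user_email}'\n"
        "ORDER BY start_time DESC\n"
        f"LIMIT {limit}"
    )
```

In a notebook you would run it with `spark.sql(recent_queries_sql("me@example.com"))`.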
4 More Replies
- 1079 Views
- 2 replies
- 2 kudos
Hi everyone, I’m working on implementing Structured Streaming in Databricks to capture Change Data Capture (CDC) as part of a Medallion Architecture (Bronze, Silver, and Gold layers). While Microsoft’s documentation provides a theoretical approach, I’...
Latest Reply
Hi @JissMathew, do you have access to Databricks Academy? I believe there are plenty of example notebooks in their data engineering track. Or you can try dbdemos. For example, here you can find a demo notebook for Auto Loader: Databricks Autoloader (cloudfile...
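As a hedged sketch of the bronze-layer Auto Loader pattern those demos cover — the option keys follow the Auto Loader (cloudFiles) documentation, while the paths in the docstring are placeholders you would swap for your own storage locations:

```python
def autoloader_options(schema_location: str, source_format: str = "json") -> dict:
    """Options for an Auto Loader (cloudFiles) bronze-layer stream.

    Typical usage in a notebook (paths are placeholders):
        (spark.readStream.format("cloudFiles")
              .options(**autoloader_options("/Volumes/main/bronze/_schemas"))
              .load("/Volumes/main/landing/events"))
    """
    return {
        # File format of the incoming raw files
        "cloudFiles.format": source_format,
        # Where Auto Loader persists the inferred schema between runs
        "cloudFiles.schemaLocation": schema_location,
        # Infer typed columns instead of all-string
        "cloudFiles.inferColumnTypes": "true",
    }
```

Writing the resulting stream with a checkpoint into a Delta table gives you the Bronze layer; Silver/Gold are then derived with ordinary streaming transformations.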
1 More Replies
- 327 Views
- 1 reply
- 1 kudos
Hey all, do you know if it's possible to create multiple volumes referencing the same S3 bucket from the same external location? For example, if I have two workspaces (test and prod) testing different versions of pipeline code but with static data I'd ...
Latest Reply
Yes, it is a limitation: it is not possible to create multiple volumes referencing the same S3 bucket. This restriction ensures consistency and prevents conflicts when accessing the same data source. Possible solution: use subdirectories within the...
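A sketch of that subdirectory workaround: give each environment its own prefix in the bucket so the volume locations do not overlap. The catalog, schema, volume, and bucket names below are hypothetical; the `CREATE EXTERNAL VOLUME ... LOCATION` syntax follows the Unity Catalog volumes documentation:

```python
def create_volume_sql(catalog: str, schema: str, volume: str, s3_prefix: str) -> str:
    # One volume per environment, each pointing at a distinct prefix
    # of the same bucket; all names here are placeholders.
    return (
        f"CREATE EXTERNAL VOLUME {catalog}.{schema}.{volume} "
        f"LOCATION 's3://my-bucket/{s3_prefix}'"
    )

# e.g. one statement per workspace environment:
test_stmt = create_volume_sql("test_catalog", "raw", "static_data", "test/static")
prod_stmt = create_volume_sql("prod_catalog", "raw", "static_data", "prod/static")
```

Each statement would be run via `spark.sql(...)` in the corresponding workspace.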
- 1047 Views
- 1 reply
- 0 kudos
Hi, we're experiencing an issue with SQL Serverless Warehouse when running queries through the dbx-sql-connector in Python. The error we get is: "Query has been timed out due to inactivity." This happens intermittently, even for queries that should com...
Latest Reply
Possible reasons for this error may include:
- The warehouse is busy or waiting for compute resources.
- Connection or network issues.

Solutions to try:
- Increase the timeout duration and try again.
- If the issue persists, please share the error message for fur...
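For intermittent timeouts like this, a small retry-with-backoff wrapper on the client side is one pragmatic mitigation. This is generic Python, not a feature of the databricks-sql-connector; `execute` stands in for any zero-argument callable, e.g. a lambda wrapping `cursor.execute(...)`:

```python
import time

def run_with_retry(execute, retries=3, base_delay=1.0):
    """Call `execute` and retry with exponential backoff on failure.

    `execute` is any zero-argument callable; the last failure is re-raised
    once the retry budget is exhausted.
    """
    for attempt in range(retries):
        try:
            return execute()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts; surface the original error
            time.sleep(base_delay * (2 ** attempt))
```

In production you would narrow the `except` clause to the connector's timeout exception rather than catching everything.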
- 275 Views
- 3 replies
- 0 kudos
Is it somehow possible to create a message or alerting for specific Databricks environments to make people more aware that they are using e.g. a PROD environment? It can be reflected in the environment name like "dev" or "prod", yes. But it would be n...
Latest Reply
It seems that for Azure the process is a little different; you might follow the steps in https://learn.microsoft.com/en-us/azure/databricks/resources/ideas
2 More Replies
- 236 Views
- 2 replies
- 0 kudos
Hey, in order to build more meaningful monitoring of usage for a few platform jobs I am using, I need to be able to access the job_parameters object of job runs. While job_parameters exists in the system.workflow.job_run_timeline table, it is not populated ...
Latest Reply
@yairofek wrote: Hey, in order to build more meaningful monitoring of usage for a few platform jobs I am using, I need to be able to access the job_parameters object of job runs. While job_parameters exists in system.workflow.job_run_timeline table, it ...
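A sketch of the query under discussion. The table and column names (system.workflow.job_run_timeline, job_parameters) are taken from the thread above; confirm the schema name in your own account, since system table schemas have changed between previews, and the extra columns selected here are assumptions:

```python
def job_parameters_sql(job_id: str) -> str:
    # Filter to rows where job_parameters actually got populated,
    # matching the behavior described in the question above.
    return (
        "SELECT run_id, job_parameters\n"
        "FROM system.workflow.job_run_timeline\n"
        f"WHERE job_id = '{job_id}' AND job_parameters IS NOT NULL"
    )
```

Running this for a job whose runs predate the column's rollout would return no rows, which matches the "not populated" symptom in the question.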
1 More Replies
- 3961 Views
- 7 replies
- 6 kudos
Hello! We have lots of Azure Key Vaults that we use in our Azure Databricks workspaces. We have created secret scopes that are backed by the Key Vaults. Azure supports two ways of authenticating to Key Vaults:
- Access policies, which has been marked as l...
Latest Reply
@Chamak You can find 'AzureDatabricks' under User, group or service principal assignment. You don't need to find the application ID, as it will be displayed automatically when you add AzureDatabricks as a member.
cc: @daniel_sahal
6 More Replies
by vsd • New Contributor III
- 655 Views
- 5 replies
- 2 kudos
Hi Team, we need to have a single public IP for all outbound traffic flowing through our Databricks cluster. Secure Cluster Connectivity (SCC) is disabled for our cluster, and currently we get dynamic public IPs assigned to the VMs under the managed res...
- 223 Views
- 1 reply
- 0 kudos
Hello community, I deployed a resource group with a particular name one month ago, with two Databricks workspaces deployed inside it. Is it possible to rename the resource group without any problem? Or do I need to move the existing dbws to a n...
Latest Reply
Hi @jeremy98, unfortunately, you cannot rename a resource group. You need to create a new resource group and recreate all required resources.
- 258 Views
- 1 reply
- 0 kudos
Hi, I have been using Databricks for a couple of months and have been spinning up workspaces with Terraform. The other day we decided to end our POC and move on to an MVP. This meant cleaning up all workspaces and GCP. After the cleanup was done I wanted to...
Latest Reply
This could be related to quota limits, permissions, or other configuration issues. Ensuring that the necessary permissions are set and that the quota limits are correctly configured might help resolve the issue.
- 252 Views
- 1 reply
- 0 kudos
I could be mistaken, but it seems like the system tables contain data from all workspaces, even workspaces that you don't have access to. According to the principle of least privilege, I do not think that's a good idea. If the aforementioned is correct, has s...
Latest Reply
As per the documentation, it is confirmed that system tables include data from all workspaces in your account, but they can only be accessed from a workspace with Unity Catalog. You can restrict which admins have access to these system tables. It is not possib...
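Restricting that access is done with ordinary Unity Catalog grants. A sketch, run as a metastore admin — the principal name is a placeholder, and which system schemas exist varies by account, so treat the object names as examples:

```python
def system_table_grants(principal: str) -> list:
    # GRANT statements follow Unity Catalog privilege syntax; run each via
    # spark.sql(...). `principal` is a group or user name in your account.
    return [
        f"GRANT USE CATALOG ON CATALOG system TO `{principal}`",
        f"GRANT USE SCHEMA ON SCHEMA system.query TO `{principal}`",
        f"GRANT SELECT ON TABLE system.query.history TO `{principal}`",
    ]
```

Granting at the table level rather than the catalog level keeps the exposure to exactly the system tables a group actually needs.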
- 3469 Views
- 1 reply
- 0 kudos
I need assistance with writing API/Python code to manage a Databricks workspace permissions database (Unity Catalog). The task involves obtaining a list of workspace details from the account console, which includes various details like workspace name,...
Latest Reply
Here's a start:
https://docs.databricks.com/api/workspace/workspacebindings/updatebindings
As far as coding, I use cURL. See the attachment for the syntax. Note the example in the attachment is for Workspace notebooks, as opposed to Workspace envir...
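If you prefer plain Python over cURL, here is a sketch of building a PATCH request for the workspace-bindings endpoint linked above. The host, token, securable names, and workspace ID are placeholders, and the request body shape should be checked against that docs page:

```python
import json
import urllib.request

def bindings_request(host, token, securable_type, securable_name, workspace_id):
    """Build (but do not send) a PATCH request for the update-bindings API.

    The path follows the docs page linked above. Send the request with
    urllib.request.urlopen(req) once host and token are real values.
    """
    url = f"{host}/api/2.1/unity-catalog/bindings/{securable_type}/{securable_name}"
    body = json.dumps({
        "add": [{"workspace_id": workspace_id,
                 "binding_type": "BINDING_TYPE_READ_WRITE"}]
    }).encode("utf-8")
    return urllib.request.Request(
        url, data=body, method="PATCH",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
```

The same call is also wrapped by the Databricks SDK for Python, which may be more convenient than hand-rolled requests for a larger permissions-management script.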
- 332 Views
- 1 reply
- 0 kudos
Hello guys, I would like to get hardware metrics like server load distribution, CPU utilization, and memory utilization, and send them to Azure Monitor. Is there any way to do this? Can you help me with this doubt? Thanks.
Latest Reply
@xzero-trustx wrote: Hello guys, I would like to get hardware metrics like server load distribution, CPU utilization, memory utilization and send it to Azure Monitor. Is there any way to do this? Can you help me with this doubt? Thanks. Hello! Yes, you ca...
- 814 Views
- 1 reply
- 0 kudos
Hello, I saw multiple topics about it, but I need explanations and a solution. In my context, we have developers that are developing Python projects, like X. In Databricks, we have a cluster with a library of the main project A that is dependent on X.p...
Latest Reply
I saw that the solution may be in the init script, but it's not really easy to work with. I mean, there's no log generated from the bash script, so this is not an easy way to solve my problem. Maybe you have some advice about it?
- 285 Views
- 1 reply
- 1 kudos
Hi, I have a situation where I can run my notebook without any issue when I use a 'normal' cluster. However, when I run the exact same notebook on a job cluster it fails. It fails at the point where it runs the cell: `%run ../utils/some_other_notebook` And...
Latest Reply
Not sure what went wrong, but after pulling the sources (notebooks) again from Git, it now works both on my 'normal' cluster and the 'job' cluster. Case closed for me...