Recursive CTE in Databricks SQL
Hi Team, how do I write a recursive CTE in Databricks SQL? Please let me know if anyone has a solution for this.
It is still not supported. Not sure when it will be (if ever).
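Since `WITH RECURSIVE` is not supported, the usual workaround is to unroll the recursion into a loop that keeps unioning newly derived rows until nothing new appears (a fixpoint). A minimal plain-Python sketch of that idea, using a made-up org-hierarchy edge list; on Databricks you would do the same with DataFrame `join`/`union` inside a Python loop:

```python
# Emulate a recursive CTE (transitive closure) with an iterative
# fixpoint loop -- the standard workaround when the engine lacks
# WITH RECURSIVE. Edge data below is a toy example.
edges = {(1, 2), (2, 3), (3, 4)}  # (manager_id, employee_id)

# Seed set: the direct edges, like the anchor member of a recursive CTE.
closure = set(edges)
while True:
    # "Recursive member": join the closure back onto the edges to
    # extend every known path by one hop.
    new_pairs = {(a, d) for (a, b) in closure for (c, d) in edges if b == c}
    if new_pairs <= closure:  # no new rows -> fixpoint reached, stop
        break
    closure |= new_pairs

print(sorted(closure))
```

On Databricks, the set union becomes `df.union(step_df).dropDuplicates()` and the loop exits when the row count stops growing; cache or checkpoint between iterations to keep the plan from blowing up.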
I have a Terraform project that creates a workspace in Databricks, assigns it to an existing metastore, and then creates an external location, storage credential, and catalog. The apply works and all expected resources are created. However, without touching any r...
Hi @lindsey, it seems you're encountering an issue with Terraform and Databricks when trying to destroy resources. Let's explore some potential solutions to address this problem. Resource order in the Terraform configuration: ensure that the databric...
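One way to make the destroy order explicit is a `depends_on` from the Unity Catalog resources to the metastore assignment, so Terraform destroys the catalog while the workspace is still attached to the metastore. A hedged HCL sketch; the resource names here are hypothetical and need adapting to the actual configuration:

```hcl
# Hypothetical resource names -- adapt to your own configuration.
resource "databricks_metastore_assignment" "this" {
  workspace_id = azurerm_databricks_workspace.this.workspace_id
  metastore_id = databricks_metastore.this.id
}

resource "databricks_catalog" "main" {
  name = "main"

  # Terraform destroys dependents first, so this guarantees the
  # catalog is removed before the metastore assignment is torn down.
  depends_on = [databricks_metastore_assignment.this]
}
```

The same `depends_on` can be added to the storage credential and external location resources if they hit the same ordering problem.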
I have tried multiple ways to set the row-group size for Delta tables in a Databricks notebook, but it's not working, whereas I am able to set it properly using Spark. I tried: 1. val blockSize = 1024 * 1024 * 60; spark.sparkContext.hadoopConfiguration.setInt("dfs.bloc...
Hi @dlaxminaresh, Setting row groups for Delta tables in Databricks can be a bit tricky, but let’s explore some options to achieve this. First, let’s address the approaches you’ve tried: Setting Block Sizes: You’ve attempted to set the block size...
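For Parquet files (which back Delta tables), the row-group size is governed by the Parquet writer's block size. A configuration sketch, assuming the Delta writer passes these options through to parquet-mr; this needs a live Spark session and is not runnable standalone, and the path is a placeholder:

```python
# Configuration sketch -- requires an active Databricks/Spark session.
# Session-wide: route the Hadoop conf through Spark's "spark.hadoop." prefix.
spark.conf.set("spark.hadoop.parquet.block.size", 60 * 1024 * 1024)

# Or per-write, as a writer option (assumption: honored by the Delta writer):
(df.write.format("delta")
   .option("parquet.block.size", 60 * 1024 * 1024)
   .mode("overwrite")
   .save("/path/to/table"))  # hypothetical path
```

If the files still come out with the default row groups, inspect one with `parquet-tools meta` to confirm whether the option reached the writer at all.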
I am trying to set up CI/CD with Azure DevOps and 3 workspaces (dev, test, prod) using asset bundles. All 3 workspaces will have their own catalog in Unity Catalog. I can't find a way to change which catalog should be used by the jobs and DLT pipelines ...
Hi @JonathanFlint, setting up CI/CD with Azure DevOps across multiple workspaces and Unity Catalog catalogs can be achieved. Here are some approaches you can consider. Catalog switching at runtime: at the beginning of your program, issue ...
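With asset bundles specifically, the usual pattern is a bundle variable that each deployment target overrides, so the same job and pipeline definitions pick up a different catalog per environment. A sketch of the `databricks.yml` shape, with made-up catalog names:

```yaml
# databricks.yml -- sketch; catalog names are hypothetical
variables:
  catalog:
    description: Unity Catalog catalog used by jobs and DLT pipelines
    default: dev_catalog

targets:
  dev:
    variables:
      catalog: dev_catalog
  test:
    variables:
      catalog: test_catalog
  prod:
    variables:
      catalog: prod_catalog
```

Elsewhere in the bundle the value is referenced as `${var.catalog}`, for example in a DLT pipeline's `catalog:` field or as a job parameter, so `databricks bundle deploy -t prod` binds everything to the prod catalog.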
I understand only a limited set of Spark configurations is supported in SQL Warehouse, but is it possible to add Spark extensions to SQL Warehouse clusters? Use case: we have a few restricted table properties. We prevent that with Spark extensions installed in...
Hi @naveenanto, while Databricks SQL Warehouses only support a limited, curated set of Spark configurations, there are a few avenues worth considering for enforcing restrictions like yours. Let me provide you with some inform...
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
ValueError: default auth: cannot configure default credentials ...
I'm trying to instantiate a WorkspaceClient in a notebook on a cluster running a Docker image, but authentication fails. T...
Hi @JohanS, it seems you're encountering an authentication issue when trying to instantiate a WorkspaceClient on a Databricks cluster running a custom Docker image. Let's troubleshoot this! The error message you're seeing, "default auth: cannot configure defau...
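The SDK's default credential chain falls back to environment variables when notebook-native auth is unavailable (as it can be inside a custom Docker image). A sketch of supplying PAT credentials explicitly; the host URL below is a made-up placeholder:

```shell
# Config fragment: set before the notebook/process starts, e.g. in the
# container environment or the cluster's environment variables.
export DATABRICKS_HOST="https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder URL
export DATABRICKS_TOKEN="<personal-access-token>"
```

Alternatively, pass the same values directly in code with `WorkspaceClient(host=..., token=...)`, which bypasses the default chain entirely.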
Hello, some variations of this question have been asked before, but there doesn't seem to be an answer for the following simple use case. I have the following file structure in a Databricks Asset Bundles project:
src
--dir1
----file1.py
--dir2
----file2...
Hi, I have a DLT pipeline that applies changes from a source table (cdctest_cdc_enriched) to a target table (cdctest) via the following code:
dlt.apply_changes(
    target = "cdctest",
    source = "cdctest_cdc_enriched",
    keys = ["ID"],
    sequence_by...
Hi @Anske, It seems you’re encountering an issue with your Delta Live Tables (DLT) pipeline where updates from the source table are not being correctly applied to the target table. Let’s troubleshoot this together! Pipeline Update Process: Whe...
Hello everyone, I have a workflow set up that updates a few Delta tables incrementally with Auto Loader three times a day. Additionally, I run a separate workflow that performs VACUUM and OPTIMIZE on these tables once a week. The issue I'm facing is that...
Hi, I have configured 20 different workflows in Databricks, each with its own job cluster with a different name. All 20 workflows are scheduled to run at the same time. But even with a different job cluster configured in each of them, they run sequentially w...
Hi @jainshasha, I tried to replicate your problem, but in my case I was able to run the jobs in parallel (the only difference is that I am running the notebook from the workspace, not from a repo). As you can see, the jobs did not start at exactly the same time, but they ran in par...
I need help with migrating from DBFS on Databricks to workspace files. I am new to Databricks and am struggling with what is in the links provided. My workspace.yml also has DBFS hard-coded. Included is a full deployment with Great Expectations. This was don...
One of the other suggestions is to use Lakehouse Federation. It is possible it may be a driver issue (we will get to know from the logs)
Hi there everyone, we are trying to get hands-on with Databricks Lakehouse for a prospective client's project. Our major aim is to compare a data lakehouse on Databricks with a BigQuery data warehouse in terms of cost and the time to set up and run que...
Hi @ashraf1395, comparing Databricks Lakehouse and Google BigQuery is essential to make an informed decision for your project. Let's address your questions. Cluster configurations for Databricks: Databricks provides flexibility in configuring com...
Hello, I am trying to download lists from SharePoint into a pandas DataFrame. However, I cannot get any information successfully. I have attempted many solutions mentioned on Stack Overflow. Below is one of those attempts: # https://pypi.org/project/sha...
The error "<urlopen error [Errno -2] Name or service not known>" suggests that there's an issue with the server URL or network connectivity. Double-check the server URL to ensure it's correct and accessible. Also, verify that your network connection ...
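Since `[Errno -2] Name or service not known` is a DNS-resolution failure, it can help to separate "the hostname doesn't resolve from this cluster" from "the SharePoint client is misconfigured". A small preflight check, assuming nothing beyond the standard library; the SharePoint host below is a placeholder for your tenant URL:

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True only if `host` resolves via DNS or the hosts file.
    A False here reproduces the same failure mode as
    '<urlopen error [Errno -2] Name or service not known>'."""
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

# Substitute your real tenant, e.g. "contoso.sharepoint.com" (placeholder).
print(can_resolve("localhost"))             # resolves via the hosts file
print(can_resolve("no-such-host.invalid"))  # reserved TLD, never resolves
```

If the check fails for your SharePoint host from the Databricks cluster but succeeds from your laptop, the problem is network egress/DNS on the cluster side (VNet/firewall), not the library.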
Hello, we're working with a serverless SQL warehouse to query Delta tables and display some analytics in dashboards. We have some basic GROUP BY queries that generate around 36k rows, and they are executed without the LIMIT keyword. So in the data ...
Hey @RabahO, this is likely a memory issue. The current behavior is that Databricks will only attempt to display the first 64,000 rows of data. If those first 64,000 rows are larger than 2187 MB, then it will fail to display anything. In your cas...
Hi all, I want to add a member to a group at the Databricks account level using the REST API (https://docs.databricks.com/api/azure/account/accountgroups/patch), as mentioned in this link. I could authenticate but was not able to add a member while using the belo...
Hi @Kaniz, I have tried the suggested body as well, but the member is still not added to the group. Is there any other method I can use to add a member to the group at the account level? Thanks, Phani.
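The account-level Groups PATCH endpoint is a SCIM 2.0 API, so the request body must be a SCIM `PatchOp` whose `add` operation targets the `members` attribute with `{"value": <user-id>}` entries; a plain `{"members": [...]}` body is silently ignored. A sketch that builds that payload (the user id and the request shown in comments are hypothetical, but the PatchOp shape is standard SCIM 2.0):

```python
import json

def add_member_patch(user_id: str) -> dict:
    """Build a SCIM 2.0 PatchOp body that adds one user to a group."""
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [
            {
                "op": "add",
                "path": "members",
                # "value" entries carry the user's numeric account-level id
                "value": [{"value": user_id}],
            }
        ],
    }

body = add_member_patch("1234567890")  # hypothetical user id
print(json.dumps(body, indent=2))

# Sketch of the call itself (not executed here):
#   PATCH .../accounts/{account_id}/scim/v2/Groups/{group_id}
#   Headers: Authorization: Bearer <token>, Content-Type: application/scim+json
```

Also double-check that the id in each `value` entry is the user's account-level id (from the account SCIM Users API), not the workspace-level id or the email address.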