Warehousing & Analytics

Forum Posts

Sudheer2
by Visitor
  • 16 Views
  • 0 replies
  • 0 kudos

Updating SQL Warehouse using Terraform

Manual approach: we can update a SQL Warehouse manually in Databricks. Click SQL Warehouses in the sidebar; under Advanced options there is a Unity Catalog toggle. While updating an existing SQL Warehouse in Azure to enable Unity Catalog using Terraf...

[Attachment: warehouse error.png]
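For reference, a minimal sketch of the same update done programmatically with the Databricks Python SDK rather than Terraform (the warehouse ID and settings are placeholders, and the Unity Catalog toggle itself may be governed by the workspace's metastore assignment rather than a per-warehouse field):

    from databricks.sdk import WorkspaceClient

    # Authenticates from the environment (DATABRICKS_HOST / DATABRICKS_TOKEN)
    # or a configured profile.
    w = WorkspaceClient()

    # "abc123" is a placeholder; look up real IDs with w.warehouses.list().
    w.warehouses.edit(
        id="abc123",
        cluster_size="2X-Small",
        auto_stop_mins=10,
        enable_serverless_compute=True,  # serverless warehouses run UC-enabled
    )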
RobertAnderson6
by New Contributor
  • 44 Views
  • 1 reply
  • 0 kudos

Support for JDBC tables in SQL endpoint.

Hello, I'm wondering if there's a method or workaround to execute JDBC table queries in a similar manner to other cluster types. Currently, attempting to do so results in an error stating that only text-based files (such as JSON, Parquet, Delta, etc.)...

Latest Reply
adurand-accure
New Contributor
  • 0 kudos

Could you explain in more detail what you are trying to achieve?

shanebo425
by New Contributor
  • 186 Views
  • 4 replies
  • 0 kudos

Resolved! Grant Unity Catalog Access without Workspace Access

We have created a Unity Catalog instance on top of our Lakehouse (built entirely with Azure Databricks). We are using Power BI to develop and serve our analytics and reporting needs. I've granted the "Account Users" group the appropriate privileges f...

Warehousing & Analytics
Databricks
permissions
Unity Catalog
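For readers hitting the same question, the catalog-level grants themselves are plain Unity Catalog SQL; a minimal sketch (the catalog and schema names are made up, and `account users` is the built-in group the post refers to):

    # Run from any UC-enabled context (notebook, SQL editor, etc.).
    spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `account users`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.reporting TO `account users`")
    spark.sql("GRANT SELECT ON SCHEMA analytics.reporting TO `account users`")
    # Blocking workspace UI access is a separate entitlement question (see thread).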
Latest Reply
shanebo425
New Contributor
  • 0 kudos

Thanks for explaining this! This doesn't do exactly what I was hoping: it doesn't block all access to the workspace. Users can still log in, access their own workspace, run SQL queries, explore the catalog, etc. But they ARE blocked from accessin...

3 More Replies
bradleyjamrozik
by New Contributor III
  • 49 Views
  • 0 replies
  • 0 kudos

ODBC Connection Does Not Disconnect

I have an on-premises Power BI Report Server that uses the Simba Spark ODBC Driver (2.8) to connect to Databricks. It can connect to a serverless warehouse successfully and run its queries, but it never seems to disconnect the session, and so the war...
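One thing worth ruling out on the client side is whether connections are ever closed explicitly; a minimal pyodbc sketch with an explicit close (connection-string values are placeholders following the Simba driver's DSN-less format):

    import pyodbc

    conn_str = (
        "Driver=Simba Spark ODBC Driver;"
        "Host=adb-1234567890.12.azuredatabricks.net;"   # placeholder host
        "Port=443;"
        "HTTPPath=/sql/1.0/warehouses/abc123;"          # placeholder warehouse
        "SSL=1;ThriftTransport=2;AuthMech=3;"
        "UID=token;PWD=<personal-access-token>"
    )

    conn = pyodbc.connect(conn_str, autocommit=True)
    try:
        cur = conn.cursor()
        cur.execute("SELECT 1")
        print(cur.fetchall())
    finally:
        # Without an explicit close, the session can linger until the
        # warehouse's auto-stop timeout finally reclaims it.
        conn.close()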

KrzysztofPrzyso
by New Contributor II
  • 89 Views
  • 0 replies
  • 0 kudos

exposing RAW files using read_files based views, partition discovery and skipping, performance issue

Hi, as a formal requirement of my project I need to keep the original raw files (mainly CSVs and XMLs) on the lake. Later on they are ingested into Delta-format medallion stages: bronze, silver, gold, etc. From the audit, operations and discov...
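A minimal sketch of the view pattern described above (the path and options are illustrative; read_files is Databricks' table-valued function for reading raw files):

    # A view over raw CSVs; Hive-style partition directories under the path
    # (e.g. .../ingest_date=2024-01-01/) are discovered automatically.
    spark.sql("""
        CREATE OR REPLACE VIEW raw_invoices_v AS
        SELECT * FROM read_files(
            'abfss://raw@mylake.dfs.core.windows.net/invoices/',
            format => 'csv',
            header => true
        )
    """)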

DataFarmer
by New Contributor II
  • 318 Views
  • 2 replies
  • 1 kudos

Data warehouse in Databricks: date values as date or int, what is recommended?

In relational data warehouse systems it was best practice to represent date values as YYYYMMDD integer values in tables. Date comparisons could be done easily without date functions and with low performance impact. Is this still the recomme...

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 1 kudos

Hi @DataFarmer, in Databricks I would advise you to use the date type instead of int; this will make your life much simpler when working with date data.
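A quick illustration of the difference (column and table values are made up):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = (
        spark.createDataFrame([("2024-01-15",), ("2023-12-31",)], ["order_date_str"])
        .withColumn("order_date", F.to_date("order_date_str"))
    )

    # With a proper DATE column, comparisons need no conversion:
    recent = df.filter(F.col("order_date") >= F.lit("2024-01-01").cast("date"))
    aged = df.withColumn("age_days", F.datediff(F.current_date(), "order_date"))

    # A YYYYMMDD int column would force a parse on every comparison, e.g.:
    # F.to_date(F.col("order_date_int").cast("string"), "yyyyMMdd")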

1 More Replies
JustinM
by New Contributor II
  • 1475 Views
  • 4 replies
  • 2 kudos

Cannot connect to SQL Warehouse using JDBC connector in Spark

When trying to connect to a SQL warehouse using the JDBC connector with Spark, the error below is thrown. Note that connecting directly to a cluster with similar connection parameters works without issue; the error only occurs with SQL warehouses. py4j...

Latest Reply
jmms
New Contributor II
  • 2 kudos

Same error here. I am trying to save a Spark DataFrame to Delta Lake using the JDBC driver and PySpark, with this code:

# Spark session
spark_session = SparkSession.builder \
    .appName("RCT-API") \
    .config("spark.metrics.namespace", "rct-a...
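For reference, a sketch of what a working JDBC read against a SQL warehouse can look like with the Databricks JDBC driver (host, HTTP path, and token are placeholders; the driver JAR must be on the Spark classpath):

    url = (
        "jdbc:databricks://adb-1234567890.12.azuredatabricks.net:443/default;"
        "transportMode=http;ssl=1;"
        "httpPath=/sql/1.0/warehouses/abc123;"
        "AuthMech=3;UID=token;PWD=<personal-access-token>"
    )

    df = (
        spark.read.format("jdbc")
        .option("url", url)
        .option("driver", "com.databricks.client.jdbc.Driver")
        .option("dbtable", "samples.nyctaxi.trips")
        .load()
    )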

3 More Replies
Kroy
by Contributor
  • 638 Views
  • 3 replies
  • 2 kudos

Not able to create SQL warehouse cluster in free subscription

I have taken a free subscription to Azure Databricks, but when I try to create a 2X-Small warehouse cluster I get the error below. Help appreciated.

[Attachment: Kroy_0-1702694045718.png]
Latest Reply
TimJB
New Contributor II
  • 2 kudos

Can somebody please answer this? I'm having the same issue. 

2 More Replies
florent
by New Contributor III
  • 1817 Views
  • 7 replies
  • 6 kudos

Resolved! Is it possible to deliver a SQL dashboard created in a Dev workspace to a Prod workspace?

In order to create a CI/CD pipeline to deliver dashboards (here, monitoring dashboards), how can I export/import a dashboard created in Databricks SQL from one workspace to another? Thanks

Latest Reply
miranda_luna_db
Contributor II
  • 6 kudos

The recommendation is to update your legacy dashboard to Lakeview and then leverage the built-in export/import support.
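As a rough sketch of that export/import flow against the Lakeview REST API (endpoints as documented in the Databricks REST reference; workspace URLs, IDs, and tokens are placeholders, and field names should be checked against your API version):

    import requests

    SRC = "https://dev-workspace.azuredatabricks.net"   # placeholder URLs
    DST = "https://prod-workspace.azuredatabricks.net"
    src_auth = {"Authorization": "Bearer <dev-token>"}
    dst_auth = {"Authorization": "Bearer <prod-token>"}

    # Export: a Lakeview dashboard definition travels as serialized JSON.
    dash = requests.get(
        f"{SRC}/api/2.0/lakeview/dashboards/<dashboard-id>", headers=src_auth
    ).json()

    # Import: recreate it in the target workspace from that definition.
    requests.post(
        f"{DST}/api/2.0/lakeview/dashboards",
        headers=dst_auth,
        json={
            "display_name": dash["display_name"],
            "serialized_dashboard": dash["serialized_dashboard"],
        },
    )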

6 More Replies
Kaizen
by Contributor III
  • 243 Views
  • 1 reply
  • 1 kudos

Command to display all compute resources available in your workspace

Hi, is there a command you could use to list all compute resources configured in your workspace (active and inactive)? This would be really helpful for anyone managing the platform to pull all the metadata (tags, etc.) and quickly evaluate all the configura...

Latest Reply
daniel_sahal
Esteemed Contributor
  • 1 kudos

@Kaizen You've got three ways of doing this:
- Using the REST API (https://docs.databricks.com/api/workspace/clusters/list)
- Using the CLI (https://github.com/databricks/cli/blob/main/docs/commands.md#databricks-clusters-list---list-all-clusters)
- Using Pyth...
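The Python route from that list, sketched with the Databricks SDK:

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()  # reads DATABRICKS_HOST / DATABRICKS_TOKEN from the env

    # Iterates over every cluster in the workspace, including terminated ones,
    # so tags and configuration can be audited in one pass.
    for c in w.clusters.list():
        print(c.cluster_name, c.state, c.custom_tags)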

Ramakrishnan83
by New Contributor III
  • 157 Views
  • 1 reply
  • 0 kudos

Intermittent SQL Failure on Databricks SQL Warehouse

Team, I set up a SQL Warehouse cluster to support requests from mobile devices through a REST API. I read through the documentation on the concurrent query limit, which is 10. But in my scenario I had 5 small clusters and the query monitoring indicated the...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Ramakrishnan83, Databricks SQL does indeed support concurrent read requests. However, the exact definition of concurrency can vary based on the cluster configuration and workload. By default, Databricks limits the number of concurrent queries per...

pankaj2264
by New Contributor II
  • 1200 Views
  • 2 replies
  • 1 kudos

Using profile_metrics and drift_metrics

Is there any business use case where profile_metrics and drift_metrics are used by Databricks customers? If so, kindly describe the scenarios where this feature can be leveraged, e.g. data lineage or table metadata updates.

Latest Reply
MohsenJ
New Contributor III
  • 1 kudos

Hey @pankaj2264, both the profile metrics and drift metrics tables are created and used by Lakehouse Monitoring to assess the performance of your model and data over time, or relative to a baseline table. You can find all the relevant information here: Intro...

1 More Replies
techuser
by New Contributor III
  • 4837 Views
  • 10 replies
  • 1 kudos

Resolved! Databricks Liquid Cluster

Hi, is it possible to convert an existing partitioned Delta table with data to liquid clustering? If so, can you please suggest the steps required? I tried and searched but couldn't find any. Is it that liquid clustering can only be done for new Delta table...
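For orientation, a sketch of the SQL typically involved (table and column names are made up; note that, per the docs, CLUSTER BY applies to unpartitioned Delta tables, so a partitioned table generally needs a rewrite):

    # Switch an existing, unpartitioned Delta table to liquid clustering:
    spark.sql("ALTER TABLE sales CLUSTER BY (customer_id, order_date)")

    # Re-cluster already-written data:
    spark.sql("OPTIMIZE sales")

    # A partitioned table generally has to be rewritten into a clustered copy:
    spark.sql("""
        CREATE OR REPLACE TABLE sales_clustered
        CLUSTER BY (customer_id, order_date)
        AS SELECT * FROM sales_partitioned
    """)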

Latest Reply
Raja_Databricks
New Contributor II
  • 1 kudos

Does liquid clustering accept MERGE? How can upserts be done efficiently with a liquid-clustered Delta table?

9 More Replies
rocky5
by New Contributor III
  • 788 Views
  • 1 reply
  • 0 kudos

Resolved! Incorrect results of row_number() function

I wrote simple code:

from pyspark.sql import SparkSession
from pyspark.sql.window import Window
from pyspark.sql.functions import row_number, max
import pyspark.sql.functions as F

streaming_data = spark.read.table("x")
window = Window.partitionBy("BK...

Latest Reply
ThomazRossito
New Contributor III
  • 0 kudos

Hi, in my opinion the result is correct. What needs to be noted is that the result is sorted by the "Onboarding_External_LakehouseId" column, so if there are rows with the same "BK_AccountApplicationId" code they will be split into two row_numbers. Just...
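A sketch of making row_number() deterministic by ordering within each partition (the ordering column is taken from the reply; treat the exact names as illustrative):

    from pyspark.sql.window import Window
    import pyspark.sql.functions as F

    # row_number() is only deterministic with an explicit ordering; without
    # orderBy, ties land wherever the shuffle happens to put them.
    window = (
        Window.partitionBy("BK_AccountApplicationId")
        .orderBy(F.col("Onboarding_External_LakehouseId").desc())
    )
    ranked = streaming_data.withColumn("rn", F.row_number().over(window))
    latest = ranked.filter(F.col("rn") == 1)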

jcozar
by Contributor
  • 868 Views
  • 2 replies
  • 0 kudos

Join multiple streams with watermarks

Hi! I receive three streams from a Postgres CDC. These 3 tables (invoices, users, and products) need to be joined. I want to use a left join with respect to the invoices stream. In order to compute correct results and release old state, I use watermarks a...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @jcozar, it seems you're encountering an issue with multiple event-time columns in your Spark Structured Streaming join. Let's break down the problem and find a solution. Event time columns: in Spark Structured Streaming, event time is crucia...
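A minimal sketch of a watermarked stream-stream left join for this setup (table names, column names, and the 10-minute bounds are illustrative):

    import pyspark.sql.functions as F

    # Placeholder CDC tables; each stream carries its own event-time column.
    invoices = (
        spark.readStream.table("cdc_invoices")
        .withWatermark("invoice_ts", "10 minutes")
        .alias("inv")
    )
    users = (
        spark.readStream.table("cdc_users")
        .withWatermark("user_ts", "10 minutes")
        .alias("usr")
    )

    # A stream-stream left join needs watermarks on both sides plus a
    # time-range condition so old state can be evicted.
    joined = invoices.join(
        users,
        F.expr(
            """
            inv.user_id = usr.user_id AND
            usr.user_ts BETWEEN inv.invoice_ts - INTERVAL 10 MINUTES
                            AND inv.invoice_ts + INTERVAL 10 MINUTES
            """
        ),
        "leftOuter",
    )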

1 More Replies