Warehousing & Analytics

Forum Posts

JustinM
by New Contributor II
  • 1246 Views
  • 4 replies
  • 2 kudos

Cannot connect to SQL Warehouse using JDBC connector in Spark

When trying to connect to a SQL warehouse using the JDBC connector with Spark, the error below is thrown. Note that connecting directly to a cluster with similar connection parameters works without issue; the error only occurs with SQL warehouses. py4j...

Latest Reply
jmms
Visitor
  • 2 kudos

Same error here. I am trying to save a Spark DataFrame to Delta Lake using the JDBC driver and PySpark with this code: #Spark session spark_session = SparkSession.builder \ .appName("RCT-API") \ .config("spark.metrics.namespace", "rct-a...

3 More Replies
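For anyone hitting this, the warehouse's JDBC URL has a specific shape that differs from a plain cluster connection string. Below is a minimal, hedged sketch of assembling one; the hostname, HTTP path, and token are placeholder values that would normally come from the warehouse's Connection details tab.

```python
# Hedged sketch: building a JDBC URL for a Databricks SQL warehouse.
# server_hostname, http_path, and token are placeholders -- use the values
# from your warehouse's "Connection details" tab.

def warehouse_jdbc_url(server_hostname: str, http_path: str, token: str) -> str:
    """Assemble a Databricks JDBC URL (AuthMech=3 selects token auth)."""
    return (
        f"jdbc:databricks://{server_hostname}:443/default;"
        f"transportMode=http;ssl=1;"
        f"httpPath={http_path};"
        f"AuthMech=3;UID=token;PWD={token}"
    )

url = warehouse_jdbc_url(
    "adb-1234567890123456.7.azuredatabricks.net",  # hypothetical workspace host
    "/sql/1.0/warehouses/abcdef1234567890",        # hypothetical warehouse path
    "dapi-example-token",
)

# The URL can then be passed to Spark's JDBC reader, roughly:
#   df = (spark.read.format("jdbc")
#         .option("url", url)
#         .option("dbtable", "samples.nyctaxi.trips")
#         .load())
```

Exact URL parameters should be checked against the Databricks JDBC driver documentation for the driver version in use.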
DataFarmer
by New Contributor II
  • 131 Views
  • 1 reply
  • 0 kudos

Data Warehouse in Databricks Date values as date or int: what is recommended?

In relational data warehouse systems it was best practice to represent date values as YYYYMMDD integer values in tables. Date comparisons could be done easily without date functions and with low performance impact. Is this still the recomme...

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 0 kudos

Hi @DataFarmer, in Databricks I would advise you to use the date type instead of int; this will make your life much simpler when working with date data.

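The trade-off behind that advice can be shown in plain Python (values below are made up): YYYYMMDD integers happen to sort and compare correctly, but any date arithmetic requires a conversion that a real date type gives you for free. In Databricks, a DATE column additionally lets you use built-in date functions without casts.

```python
from datetime import date, timedelta

def int_to_date(yyyymmdd: int) -> date:
    """Convert a legacy YYYYMMDD integer key to a proper date."""
    return date(yyyymmdd // 10000, (yyyymmdd // 100) % 100, yyyymmdd % 100)

# Ordering happens to work on the raw integers...
assert 20231231 < 20240101

# ...but arithmetic does not: 20231231 + 1 = 20231232 is not a date,
# whereas the date type rolls over correctly.
assert int_to_date(20231231) + timedelta(days=1) == date(2024, 1, 1)
```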
Kroy
by Contributor
  • 475 Views
  • 3 replies
  • 2 kudos

Not able to create SQL warehouse cluster in free subscription

I have taken a free subscription to Azure Databricks, but when I try to create a 2X-Small warehouse cluster I get the error below. Help appreciated.

Kroy_0-1702694045718.png
Latest Reply
TimJB
New Contributor
  • 2 kudos

Can somebody please answer this? I'm having the same issue. 

2 More Replies
florent
by New Contributor III
  • 1586 Views
  • 7 replies
  • 6 kudos

Resolved! Is it possible to deliver a SQL dashboard created in a Dev workspace to a Prod workspace?

In order to create a CI/CD pipeline to deliver dashboards (here, monitoring), how can one export/import a dashboard created in Databricks SQL from one workspace to another? Thanks

Latest Reply
miranda_luna_db
Contributor II
  • 6 kudos

The recommendation is to update your legacy dashboard to Lakeview and then leverage the built-in export/import support.

6 More Replies
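A promotion step along those lines can be sketched as below. This assumes the Lakeview REST API (GET /api/2.0/lakeview/dashboards/{id} to export the serialized definition, POST /api/2.0/lakeview/dashboards to create it elsewhere); the endpoint names, payload fields, hosts, and IDs here are illustrative and should be verified against the current API reference.

```python
# Hedged sketch of a dev-to-prod Lakeview dashboard promotion step.
# Only builds the request pieces; sending them (e.g. with `requests`)
# and authentication are left out.

def export_request(dev_host: str, dashboard_id: str) -> str:
    """URL that returns the dashboard, including its serialized definition."""
    return f"https://{dev_host}/api/2.0/lakeview/dashboards/{dashboard_id}"

def import_payload(name: str, serialized_dashboard: str, parent_path: str) -> dict:
    """Body for creating the dashboard in the prod workspace."""
    return {
        "display_name": name,
        "serialized_dashboard": serialized_dashboard,
        "parent_path": parent_path,
    }

url = export_request("dev.cloud.databricks.com", "abc123")        # hypothetical
payload = import_payload("monitoring", "{...}", "/Shared/dashboards")
```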
Kaizen
by Contributor III
  • 90 Views
  • 1 reply
  • 0 kudos

Command to display all computes available in your workspace

Hi! Is there a command you can use to list all compute resources configured in your workspace (active and inactive)? This would be really helpful for anyone managing the platform to pull all the metadata (tags, etc.) and quickly evaluate all the configura...

Latest Reply
daniel_sahal
Esteemed Contributor
  • 0 kudos

@Kaizen You've got three ways of doing this:
  • Using the REST API (https://docs.databricks.com/api/workspace/clusters/list)
  • Using the CLI (https://github.com/databricks/cli/blob/main/docs/commands.md#databricks-clusters-list---list-all-clusters)
  • Using Pyth...

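The REST route returns JSON of the shape `{"clusters": [...]}`, which can be reduced to the fields an admin usually audits. The helper below operates on that shape; the sample data is made up, and the SDK snippet in the comment assumes the `databricks-sdk` package.

```python
# Hedged sketch: summarising cluster metadata from GET /api/2.0/clusters/list.

def summarize_clusters(response: dict) -> list[dict]:
    """Pull out name, state, and tags for each cluster in the response."""
    return [
        {
            "name": c.get("cluster_name"),
            "state": c.get("state"),
            "tags": c.get("custom_tags", {}),
        }
        for c in response.get("clusters", [])
    ]

sample = {"clusters": [  # made-up sample response
    {"cluster_name": "etl-nightly", "state": "TERMINATED",
     "custom_tags": {"team": "data-eng"}},
    {"cluster_name": "adhoc", "state": "RUNNING", "custom_tags": {}},
]}
print(summarize_clusters(sample))

# With the Python SDK the same listing is roughly:
#   from databricks.sdk import WorkspaceClient
#   for c in WorkspaceClient().clusters.list():
#       print(c.cluster_name, c.state, c.custom_tags)
```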
Ramakrishnan83
by New Contributor III
  • 113 Views
  • 1 reply
  • 0 kudos

Intermittent SQL Failure on Databricks SQL Warehouse

Team, I set up a SQL warehouse cluster to support requests from mobile devices through the REST API. I read through the documentation on the concurrent query limit, which is 10. But in my scenario I had 5 small clusters, and the query monitoring indicated the...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Ramakrishnan83, Databricks SQL does indeed support concurrent read requests. However, the exact definition of concurrency can vary based on the cluster configuration and workload. By default, Databricks limits the number of concurrent queries per...

pankaj2264
by New Contributor II
  • 1054 Views
  • 2 replies
  • 1 kudos

Using profile_metrics and drift_metrics

Is there any business use case where profile_metrics and drift_metrics are used by Databricks customers? If so, kindly provide a scenario where to leverage this feature, e.g. data lineage or table metadata updates.

Latest Reply
MohsenJ
New Contributor III
  • 1 kudos

Hey @pankaj2264, both the profile metrics and drift metrics tables are created and used by Lakehouse Monitoring to assess the performance of your model and data over time, or relative to a baseline table. You can find all the relevant information here: Intro...

1 More Replies
techuser
by New Contributor III
  • 4391 Views
  • 10 replies
  • 1 kudos

Resolved! Databricks Liquid Cluster

Hi, is it possible to convert an existing partitioned Delta table that already holds data to liquid clustering? If so, can you please suggest the steps required? I tried and searched but couldn't find any. Is it that liquid clustering can be done only for new Delta table...

Latest Reply
Raja_Databricks
New Contributor II
  • 1 kudos

Does liquid clustering accept MERGE, and how can upserts be done efficiently with a liquid-clustered Delta table?

9 More Replies
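A hedged sketch of both points, with hypothetical table and column names: on a recent runtime an unpartitioned Delta table can be switched to liquid clustering in place, a table that is already partitioned typically has to be rewritten, and MERGE runs as usual against a clustered table. The exact syntax and version requirements are worth verifying against the current liquid clustering docs.

```sql
-- In-place switch for an UNPARTITIONED Delta table (hypothetical names):
ALTER TABLE sales.events CLUSTER BY (event_date, customer_id);
OPTIMIZE sales.events;  -- rewrites existing files according to the new keys

-- A table that is already partitioned typically has to be rewritten, e.g.:
CREATE OR REPLACE TABLE sales.events_lc
CLUSTER BY (event_date, customer_id)
AS SELECT * FROM sales.events;

-- Upserts (MERGE) work as usual against a liquid-clustered table:
MERGE INTO sales.events_lc AS t
USING updates AS s
ON t.event_id = s.event_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```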
rocky5
by New Contributor III
  • 465 Views
  • 1 reply
  • 0 kudos

Resolved! Incorrect results of row_number() function

I wrote simple code: from pyspark.sql import SparkSession from pyspark.sql.window import Window from pyspark.sql.functions import row_number, max import pyspark.sql.functions as F streaming_data = spark.read.table("x") window = Window.partitionBy("BK...

Latest Reply
ThomazRossito
New Contributor II
  • 0 kudos

Hi, in my opinion the result is correct. What needs to be noted is that the result is sorted by the "Onboarding_External_LakehouseId" column, so if there is a "BK_AccountApplicationId" with the same code, it will be partitioned into 2 row_numbers. Just...

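The semantics the reply describes can be reproduced in plain Python: `row_number()` restarts at 1 for each distinct partition key and follows the ORDER BY within the partition. The column names below mirror the post; the data is made up.

```python
# Plain-Python illustration of row_number() over
# Window.partitionBy("BK_AccountApplicationId").orderBy("ts").
from itertools import groupby
from operator import itemgetter

rows = [
    {"BK_AccountApplicationId": "A", "ts": 2},
    {"BK_AccountApplicationId": "A", "ts": 1},
    {"BK_AccountApplicationId": "B", "ts": 5},
]

def with_row_number(rows, key, order):
    out = []
    for _, group in groupby(sorted(rows, key=itemgetter(key, order)),
                            key=itemgetter(key)):
        for n, r in enumerate(group, start=1):  # numbering restarts per key
            out.append({**r, "row_number": n})
    return out

numbered = with_row_number(rows, "BK_AccountApplicationId", "ts")
# Each partition gets its own 1..n sequence, in ORDER BY order:
assert [r["row_number"] for r in numbered] == [1, 2, 1]
```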
jcozar
by Contributor
  • 675 Views
  • 2 replies
  • 0 kudos

Join multiple streams with watermarks

Hi! I receive three streams from a Postgres CDC. These 3 tables (invoices, users, and products) need to be joined. I want to use a left join with respect to the invoices stream. In order to compute correct results and release old state, I use watermarks a...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @jcozar, it seems you're encountering an issue with multiple event-time columns in your Spark Structured Streaming join. Let's break down the problem and find a solution. Event-time columns: in Spark Structured Streaming, event time is crucia...

1 More Replies
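One hedged way to set up such a join (table and column names below are hypothetical): each stream needs its own `withWatermark`, a left outer stream-stream join additionally needs a time-range condition so Spark knows when buffered state may be dropped, and if further stateful operators follow, only one event-time column should be kept. The `pyspark` import is inside the function so the sketch reads without it installed.

```python
# Hedged sketch: left-joining an invoices CDC stream to a users CDC stream
# with watermarks (hypothetical tables/columns; requires pyspark to run).

def invoices_left_join_users(spark):
    from pyspark.sql import functions as F

    invoices = (spark.readStream.table("invoices_cdc")
                .withWatermark("invoice_ts", "10 minutes"))
    users = (spark.readStream.table("users_cdc")
             .withWatermark("user_ts", "30 minutes"))

    # Equality key PLUS a time-range constraint, so old state can expire:
    cond = (
        (invoices["user_id"] == users["id"])
        & (users["user_ts"] >= invoices["invoice_ts"] - F.expr("INTERVAL 30 MINUTES"))
        & (users["user_ts"] <= invoices["invoice_ts"])
    )
    joined = invoices.join(users, cond, "leftOuter")

    # Keep a single event-time column if more stateful operators follow:
    return joined.drop("user_ts")
```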
jcozar
by Contributor
  • 820 Views
  • 2 replies
  • 0 kudos

Read Structured Streaming state information

Hi! I am exploring the read state functionality in Spark streaming: https://docs.databricks.com/en/structured-streaming/read-state.html When I start a streaming query like this: ( ... .writeStream .option("checkpointLocation", f"{CHECKPOIN...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @jcozar, execute the streaming query again to construct the state schema. Ensure that the checkpoint location (dbfs:/tmp/checkpoints/experiment_2_2) is correct and accessible.

1 More Replies
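Per the linked docs, the state reader is a batch read over the checkpoint directory, and state only exists after the query has committed at least one microbatch. A minimal sketch (path illustrative; requires a Spark runtime that supports the reader):

```python
# Hedged sketch of the state store reader described in the docs above.

def read_query_state(spark, checkpoint_path: str):
    """Batch-read the state of a streaming query from its checkpoint.

    The query must have committed at least one microbatch first, otherwise
    there is no state (or state schema) at the checkpoint to read.
    """
    return (spark.read
            .format("statestore")      # state store reader
            .load(checkpoint_path))    # e.g. "dbfs:/tmp/checkpoints/experiment_2_2"
```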
rocky5
by New Contributor III
  • 177 Views
  • 1 reply
  • 0 kudos

Stream static join with aggregation

Hi, I am trying to make a stream-static join with aggregation, with no luck. I have a streaming table where I am getting events with two nested arrays:

ID   Array1   Array2
1    [1,2]    [3,4]

I need to make two joins to static dictionary tables (without an...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @rocky5, you want to perform a stream-static join with aggregation in Databricks SQL, where you have a streaming table with nested arrays and need to join it with static dictionary tables based on the IDs contained in those arrays. Here are the...

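One hedged shape for this: explode each array on the streaming side, join each element to the (renamed) static dimension, then aggregate behind a watermark and window. Table names, the `event_ts` column, and the dimension schemas are all hypothetical; the `pyspark` import is inside the function so the sketch reads without it installed.

```python
# Hedged sketch: stream-static join over exploded nested arrays, then a
# windowed aggregation (hypothetical names; requires pyspark to run).

def enrich_stream(spark):
    from pyspark.sql import functions as F

    # Streaming side; a watermark is needed for the aggregation later.
    events = (spark.readStream.table("events")
              .withWatermark("event_ts", "10 minutes"))

    # Static sides, renamed so the join keys line up without ambiguity.
    dict1 = spark.read.table("dict1").selectExpr("id AS a1", "name AS a1_name")
    dict2 = spark.read.table("dict2").selectExpr("id AS a2", "name AS a2_name")

    exploded = (events
                .withColumn("a1", F.explode("Array1"))
                .withColumn("a2", F.explode("Array2")))

    joined = (exploded
              .join(dict1, "a1", "left")
              .join(dict2, "a2", "left"))

    # Streaming aggregations need the watermark + a time window:
    return (joined
            .groupBy("ID", F.window("event_ts", "5 minutes"))
            .count())
```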
Hubert-Dudek
by Esteemed Contributor III
  • 193 Views
  • 1 reply
  • 0 kudos

1 min auto termination

A SQL warehouse can auto-terminate after 1 minute, not the 5 minutes offered in the UI. Just run a simple CLI command. Of course, with such a low auto-termination you lose the benefit of the cache, but for some ad-hoc queries it is the perfect setup when combined with serve...

1min.png
Latest Reply
Ayushi_Suthar
Honored Contributor
  • 0 kudos

Hi @Hubert-Dudek, hope you are doing well! Could you please clarify your ask here? From the above details, the SQL warehouse mentioned is auto-terminating after 1 minute of inactivity because Auto stop is set to 1 minute. Howe...

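The setting behind this is `auto_stop_mins` on the SQL Warehouses API, which accepts values below the UI's floor. A hedged sketch that only builds the request (endpoint per the REST reference, `POST /api/2.0/sql/warehouses/{id}/edit`; host and warehouse ID are placeholders, and the exact endpoint/fields are worth checking against the current API docs):

```python
# Hedged sketch: lowering a SQL warehouse's auto-stop below the UI minimum
# via the REST API. Builds the request only; sending and auth are left out.

def edit_autostop_request(host: str, warehouse_id: str, minutes: int):
    url = f"https://{host}/api/2.0/sql/warehouses/{warehouse_id}/edit"
    body = {"auto_stop_mins": minutes}
    return url, body

url, body = edit_autostop_request(
    "adb-1234567890123456.7.azuredatabricks.net",  # hypothetical host
    "abc123",                                      # hypothetical warehouse ID
    1,
)
# Send with e.g.:
#   requests.post(url, headers={"Authorization": f"Bearer {token}"}, json=body)
```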
Noortje
by New Contributor II
  • 554 Views
  • 3 replies
  • 0 kudos

Databricks Looker Studio connector

Hi all! The Databricks Looker Studio connector has now been available for a few weeks. I tested the connector but am running into several issues: I am used to working with dynamic queries, so I am able to use date parameters (similar to the BigQuery Looker St...

Warehousing & Analytics
BI tool connector
Looker Studio
Latest Reply
Noortje
New Contributor II
  • 0 kudos

Hi @Kaniz, hope you're doing well! I am very curious about the following: "However, there might be workarounds or alternative approaches to achieve similar functionality. You could explore using Looker's native features for dynamic filtering or c...

2 More Replies