Warehousing & Analytics
Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Community. Share insights, tips, and best practices for leveraging data for informed decision-making.

Forum Posts

MadelynM
by Databricks Employee
  • 3106 Views
  • 0 replies
  • 0 kudos

[Recap] Data + AI Summit 2024 - Warehousing & Analytics | Improve performance and increase insights

Here's your Data + AI Summit 2024 - Warehousing & Analytics recap: use intelligent data warehousing to improve performance and increase your organization's productivity with analytics, dashboards, and insights. Keynote: Data Warehouse presente...

Warehousing & Analytics
AI BI Dashboards
AI BI Genie
Databricks SQL
pankaj2264
by New Contributor II
  • 2951 Views
  • 2 replies
  • 1 kudos

Using profile_metrics and drift_metrics

Is there any business use case where profile_metrics and drift_metrics are used by Databricks customers? If so, kindly provide a scenario for leveraging this feature, e.g. data lineage or table metadata updates.

Latest Reply
MohsenJ
Contributor
  • 1 kudos

Hey @pankaj2264. Both the profile metrics and drift metrics tables are created and used by Lakehouse Monitoring to assess the performance of your model and data over time or relative to a baseline table. You can find all the relevant information here: Intro...

1 More Replies
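As an illustration of how these monitoring tables are typically consumed, here is a minimal PySpark sketch that reads a drift metrics table and inspects recent windows for one column. The table name and the column names (column_name, window, js_distance) are assumptions for illustration only; check the schema of the metrics tables your own monitor produces.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical drift metrics table produced by Lakehouse Monitoring;
    # table and column names below are assumptions, not a documented schema.
    drift = spark.read.table("catalog.schema.my_table_drift_metrics")

    (drift
        .filter(F.col("column_name") == "amount")        # assumed monitored column
        .orderBy(F.col("window").desc())                  # assumed window struct column
        .select("window", "column_name", "js_distance")   # assumed drift metric column
        .show(truncate=False))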
rocky5
by New Contributor III
  • 4268 Views
  • 1 replies
  • 0 kudos

Resolved! Incorrect results of row_number() function

I wrote simple code:

    from pyspark.sql import SparkSession
    from pyspark.sql.window import Window
    from pyspark.sql.functions import row_number, max
    import pyspark.sql.functions as F

    streaming_data = spark.read.table("x")
    window = Window.partitionBy("BK...

Latest Reply
ThomazRossito
Contributor
  • 0 kudos

Hi, in my opinion the result is correct. What needs to be noted is that the result is sorted by the "Onboarding_External_LakehouseId" column, so if there is a "BK_AccountApplicationId" with the same code, it will be partitioned into two row_numbers. Just...

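To make the ordering point concrete, here is a small self-contained sketch (hypothetical data, not the poster's tables) showing that row_number() is only deterministic when the window's ORDER BY uniquely orders rows within each partition; adding a tie-breaker column stabilizes the numbering.

    from pyspark.sql import SparkSession
    from pyspark.sql.window import Window
    from pyspark.sql.functions import row_number

    spark = SparkSession.builder.getOrCreate()

    df = spark.createDataFrame(
        [("A", "2024-01-01", 1), ("A", "2024-01-01", 2), ("B", "2024-01-02", 3)],
        ["bk_id", "onboarding_date", "event_id"],
    )

    # Ambiguous: the two "A" rows share the same onboarding_date, so either
    # one may receive row_number 1 from run to run.
    ambiguous = Window.partitionBy("bk_id").orderBy("onboarding_date")

    # Deterministic: event_id breaks the tie, so the numbering is stable.
    deterministic = Window.partitionBy("bk_id").orderBy("onboarding_date", "event_id")

    df.withColumn("rn", row_number().over(ambiguous)).show()      # may vary between runs
    df.withColumn("rn", row_number().over(deterministic)).show()  # stable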
jcozar
by Contributor
  • 3752 Views
  • 1 replies
  • 0 kudos

Join multiple streams with watermarks

Hi! I receive three streams from a Postgres CDC. These three tables, invoices, users, and products, need to be joined. I want to use a left join with respect to the invoices stream. In order to compute correct results and release old state, I use watermarks a...

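For readers landing on this thread, a minimal sketch of the general pattern the question describes: two watermarked streams joined with a stream-stream left outer join plus a time-range condition so that old state can be dropped. The table names, timestamp columns, and intervals are placeholders, not the poster's actual schema.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Placeholder CDC source tables and column names.
    invoices = (spark.readStream.table("cdc.invoices")
                .withWatermark("invoice_ts", "30 minutes")
                .alias("i"))
    users = (spark.readStream.table("cdc.users")
             .withWatermark("user_ts", "2 hours")
             .alias("u"))

    # Left outer join driven by the invoices stream. The time-range condition
    # lets Spark expire user state once the watermark passes the interval.
    joined = invoices.join(
        users,
        F.expr("i.user_id = u.user_id AND "
               "u.user_ts BETWEEN i.invoice_ts - INTERVAL 2 HOURS AND i.invoice_ts"),
        "leftOuter",
    )

    # joined would then be written out with writeStream as usual.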
rocky5
by New Contributor III
  • 2220 Views
  • 0 replies
  • 0 kudos

Stream static join with aggregation

Hi, I am trying to make a stream-static join with aggregation, with no luck. I have a streaming table where I am getting events with two nested arrays:

ID   Array1   Array2
1    [1,2]    [3,4]

I need to make two joins to static dictionary tables (without an...

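A rough sketch of one way to approach the pattern described above, with hypothetical table and column names: explode each array, join the exploded values to the static dictionary tables (the static side of a stream-static join needs no watermark), then aggregate on a watermarked event time.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical names throughout.
    events = (spark.readStream.table("events")
              .withWatermark("event_ts", "10 minutes"))

    # Static dictionary tables, projected so the join keys match the exploded columns.
    dict1 = spark.read.table("dim_array1").select(F.col("key").alias("a1"),
                                                  F.col("label").alias("a1_label"))
    dict2 = spark.read.table("dim_array2").select(F.col("key").alias("a2"),
                                                  F.col("label").alias("a2_label"))

    # Note: exploding both arrays yields one row per (Array1 element, Array2 element) pair.
    exploded = (events
                .withColumn("a1", F.explode("Array1"))
                .withColumn("a2", F.explode("Array2")))

    enriched = (exploded
                .join(dict1, "a1", "left")
                .join(dict2, "a2", "left"))

    # Windowed aggregation on the watermarked event time.
    counts = (enriched
              .groupBy(F.window("event_ts", "10 minutes"), "ID")
              .count())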
Hubert-Dudek
by Esteemed Contributor III
  • 3072 Views
  • 1 replies
  • 2 kudos

1 min auto termination

A SQL warehouse can auto-terminate after 1 minute, not the 5-minute minimum available in the UI. Just run a simple CLI command. Of course, with such a low auto-termination you lose the benefit of the cache, but for some ad-hoc queries it is the perfect setup when combined with serve...

Latest Reply
Ayushi_Suthar
Databricks Employee
  • 2 kudos

Hi @Hubert-Dudek, hope you are doing well! Could you please clarify your ask here? From the above details, the SQL warehouse mentioned is auto-terminating after 1 minute of inactivity because Auto Stop is set to 1 minute. Howe...

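The post mentions doing this through a CLI command; below is a hedged, REST-based sketch of the same change using the SQL Warehouses edit endpoint and the auto_stop_mins field. The host, token, and warehouse ID are placeholders, and depending on the API version the edit call may require echoing other existing warehouse settings alongside auto_stop_mins.

    import os
    import requests

    # Placeholders: supply your own workspace URL, token, and warehouse ID.
    host = os.environ["DATABRICKS_HOST"]        # e.g. https://<workspace>.cloud.databricks.com
    token = os.environ["DATABRICKS_TOKEN"]
    warehouse_id = "1234567890abcdef"           # hypothetical warehouse ID

    # Ask the warehouse to auto-stop after 1 minute of inactivity.
    resp = requests.post(
        f"{host}/api/2.0/sql/warehouses/{warehouse_id}/edit",
        headers={"Authorization": f"Bearer {token}"},
        json={"auto_stop_mins": 1},
    )
    resp.raise_for_status()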
Noortje
by New Contributor II
  • 2986 Views
  • 2 replies
  • 0 kudos

Databricks Looker Studio connector

Hi all! The Databricks Looker Studio connector has now been available for a few weeks. I tested the connector but am running into several issues: I am used to working with dynamic queries, so I am able to use date parameters (similar to the BigQuery Looker St...

Warehousing & Analytics
BI tool connector
Looker Studio
Latest Reply
Noortje
New Contributor II
  • 0 kudos

Hi @Retired_mod, hope you're doing well! I am very curious about the following: However, there might be workarounds or alternative approaches to achieve similar functionality. You could explore using Looker's native features for dynamic filterin...

1 More Replies
Laurens
by New Contributor II
  • 4814 Views
  • 2 replies
  • 0 kudos

Setting up a Snowflake catalog via Spark config next to Unity Catalog

I'm trying to set up a connection to Iceberg on S3 via Snowflake as described in https://medium.com/snowflake/how-to-integrate-databricks-with-snowflake-managed-iceberg-tables-7a8895c2c724 and https://docs.snowflake.com/en/user-guide/tables-iceberg-catal...

Warehousing & Analytics
catalog
config
snowflake
spark
Unity Catalog
Latest Reply
Laurens
New Contributor II
  • 0 kudos

Hi @Retired_mod, we've been working on setting up Glue as the catalog, which is working fine so far. However, Glue takes the place of the hive_metastore, which appears to be a legacy way of setting this up. Is the way proposed here the recommended way to set...

1 More Replies
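For reference, a hedged sketch of the kind of Spark configuration the linked guides describe for attaching a Snowflake-managed Iceberg catalog alongside Unity Catalog. The catalog name snowflake_cat and all connection values are placeholders, and the exact property names should be checked against the Snowflake Iceberg catalog SDK documentation; on Databricks these settings would normally go into the cluster's Spark config rather than a local builder.

    from pyspark.sql import SparkSession

    # Placeholder values; property names follow the Snowflake Iceberg catalog
    # SDK docs as I understand them - verify before use.
    spark = (SparkSession.builder
        .config("spark.sql.catalog.snowflake_cat", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.snowflake_cat.catalog-impl",
                "org.apache.iceberg.snowflake.SnowflakeCatalog")
        .config("spark.sql.catalog.snowflake_cat.uri",
                "jdbc:snowflake://<account>.snowflakecomputing.com")
        .config("spark.sql.catalog.snowflake_cat.jdbc.user", "<user>")
        .config("spark.sql.catalog.snowflake_cat.jdbc.password", "<password>")
        .getOrCreate())

    # Tables would then resolve through the extra catalog, e.g.:
    # spark.table("snowflake_cat.my_db.my_schema.my_iceberg_table")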
Carsten03
by New Contributor III
  • 4009 Views
  • 2 replies
  • 0 kudos

Permission Error When Running DELETE FROM

Hi, I want to remove duplicate rows from my managed Delta table in my Unity Catalog. I use a query on a SQL warehouse similar to this:

    WITH cte AS (
      SELECT id,
             ROW_NUMBER() OVER (PARTITION BY id, ##, ##, ## ORDER BY ts) AS row_num
      FROM catalog.sch...

Latest Reply
Carsten03
New Contributor III
  • 0 kudos

I first tried to use _metadata.row_index to delete the correct rows, but this also resulted in an error. My solution was to use Spark and overwrite the table:

    table_name = "catalog.schema.table"
    df = spark.read.table(table_name)
    count_df = df....

1 More Replies
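For completeness, a minimal sketch of the overwrite-based deduplication the reply describes, assuming hypothetical key (id) and timestamp (ts) columns. Note the trade-off: overwriting rewrites the whole table instead of deleting rows in place.

    from pyspark.sql import SparkSession
    from pyspark.sql.window import Window
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    table_name = "catalog.schema.table"   # placeholder
    df = spark.read.table(table_name)

    # Keep the first row per business key, ordered by timestamp (assumed columns).
    w = Window.partitionBy("id").orderBy("ts")
    deduped = (df.withColumn("row_num", F.row_number().over(w))
                 .filter("row_num = 1")
                 .drop("row_num"))

    # Overwrite the managed table with the deduplicated data; for Delta tables
    # the read above is pinned to a snapshot, as in the reply's approach.
    deduped.write.mode("overwrite").saveAsTable(table_name)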
Priyam1
by New Contributor III
  • 4696 Views
  • 1 replies
  • 0 kudos

Databricks notebook cell doesn't show the output intermittently

Recently, it seems that there has been an intermittent issue where the output of a notebook cell doesn't display, even though the code within the cell executes successfully. For instance, there are times when simply printing a dataframe yields no out...

Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

Do you see the output in the stdout log file in such a scenario?

Linglin
by New Contributor III
  • 5365 Views
  • 2 replies
  • 0 kudos

How to pass multiple values to a dynamic variable in a dashboard's underlying SQL

    select {{user_defined_variable}} as my_var,
           count(*) as cnt
    from my_table
    where {{user_defined_variable}} = {{value}}

For user_defined_variable, I use a query-based dropdown list to get a column_name I'd like ...

primaj
by New Contributor III
  • 14502 Views
  • 14 replies
  • 9 kudos

Introspecting catalogs and schemas over JDBC in PyCharm

Hey, I've managed to add my SQL warehouse as a data source in PyCharm using the JDBC driver and can query the warehouse from a SQL console within PyCharm. This is great; however, what I'm struggling with is getting the catalogs and schemas to show in...

Latest Reply
gem7318
New Contributor II
  • 9 kudos

You need to explicitly tell your JetBrains tool to introspect the database using JDBC metadata. I think the reason it (sometimes) works in DataGrip but not PyCharm, IntelliJ, etc. is because the default settings can be different across tools and even v...

13 More Replies
Jennifer
by New Contributor III
  • 10490 Views
  • 3 replies
  • 0 kudos

How do I write a dataframe to S3 without the partition column name in the path

I am currently trying to write a dataframe to S3 like:

    df.write.partitionBy("col1", "col2").mode("overwrite").format("json").save("s3a://my_bucket/")

The path becomes `s3a://my_bucket/col1=abc/col2=opq/`, but I want the path to be `s3a://my_bucket/abc/opq/`...

Latest Reply
Sidhant07
Databricks Employee
  • 0 kudos

Hi @Jennifer, the default behavior of the .partitionBy() function in Spark is to create a directory structure with partition column names. This is similar to Hive's partitioning scheme and is done for optimization purposes. Hence, you cannot directl...

2 More Replies
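Since .partitionBy() always produces col=value directories, one commonly suggested workaround (sketched below, with the bucket and column names from the question treated as placeholders) is to loop over the distinct partition values and write each subset to an explicitly constructed path. This launches one write job per combination, so it is only practical when the number of combinations is small.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    df = spark.read.table("my_source_table")   # placeholder source

    # Write each (col1, col2) combination to a plain path instead of the
    # Hive-style col1=.../col2=... layout that partitionBy produces.
    combos = df.select("col1", "col2").distinct().collect()
    for row in combos:
        (df.filter((F.col("col1") == row["col1"]) & (F.col("col2") == row["col2"]))
           .drop("col1", "col2")
           .write.mode("overwrite")
           .format("json")
           .save(f"s3a://my_bucket/{row['col1']}/{row['col2']}/"))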
96286
by Contributor
  • 4862 Views
  • 3 replies
  • 0 kudos

Enabling serverless type for SQL warehouse running on Google Cloud Platform

I am in the process of connecting Looker to one of my Databricks databases. To reduce startup time on my SQL warehouse cluster I would like to change the type from "Pro" to "Serverless". I cannot find a way to do that and "Serverless" is not an optio...

Warehousing & Analytics
GCP
serverless
sql
warehouse
Latest Reply
Kayla
Valued Contributor II
  • 0 kudos

Echoing glawry - I'd be fascinated to know if these "ephemeral clusters" are a thing.

2 More Replies
Anuroop
by New Contributor II
  • 2903 Views
  • 2 replies
  • 1 kudos

Ticket

Hi Khishore, please help me with how you raised a ticket for the certificate issue. Thanks, Anuroop

Latest Reply
AshR
Contributor
  • 1 kudos

Please submit a ticket to our Training Team here: https://help.databricks.com/s/contact-us?ReqType=training

1 More Replies