Warehousing & Analytics
Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Community. Share insights, tips, and best practices for leveraging data for informed decision-making.

Forum Posts

MadelynM
by Databricks Employee
  • 1112 Views
  • 0 replies
  • 0 kudos

[Recap] Data + AI Summit 2024 - Warehousing & Analytics | Improve performance and increase insights

Here's your Data + AI Summit 2024 - Warehousing & Analytics recap as you use intelligent data warehousing to improve performance and increase your organization’s productivity with analytics, dashboards and insights.  Keynote: Data Warehouse presente...

Warehousing & Analytics
AI BI Dashboards
AI BI Genie
Databricks SQL
  • 1112 Views
  • 0 replies
  • 0 kudos
Artem_Y
by Databricks Employee
  • 456 Views
  • 0 replies
  • 2 kudos

How to make a sparkline in Databricks dashboards and visualizations

In this post, we'll examine one approach to creating a sparkline in a Databricks dashboard table. Approach: as of writing this post, there is no built-in method of creating sparklines in a table, so we need to explore some workarounds. All workarounds...

Warehousing & Analytics
bi
dashboard
Visualization
  • 456 Views
  • 0 replies
  • 2 kudos
BillSundwall
by New Contributor II
  • 976 Views
  • 1 replies
  • 1 kudos

Maps in AI/BI Dashboards?

Is there any official word as to when we can expect Choropleth or Marker Map visuals in AI/BI Dashboards? I realize legacy dashboards are still supported, but it feels uncertain to keep building new ones with them now that AI/BI Dashboards are in GA.

  • 976 Views
  • 1 replies
  • 1 kudos
Latest Reply
eason_gao_db
Databricks Employee
  • 1 kudos

Great question! The team is actively working on supporting Maps in AI/BI Dashboards. You can expect Marker Map to be available in a few weeks and Choropleth to come early next year. 

  • 1 kudos
prasadvaze
by Valued Contributor II
  • 10174 Views
  • 11 replies
  • 10 kudos

Azure Synapse versus Databricks SQL endpoint performance comparison

Has anyone done this and can share details? I have a sample SQL query which ran on a Large SQL endpoint in 8 min and on Synapse at the 1000 DWU setting in 1 hr. On a Small SQL endpoint it took 34 min. What's the equivalent SQL endpoint compute for Synapse at 1000 DWU? I know th...

  • 10174 Views
  • 11 replies
  • 10 kudos
Latest Reply
arslanapk99
New Contributor II
  • 10 kudos

The cost varies.

  • 10 kudos
10 More Replies
cristianc
by Contributor
  • 845 Views
  • 2 replies
  • 0 kudos

How does the refresh work for AI/BI dashboards (formerly Lakeview dashboards)?

Greetings, I'm writing this message because I want to understand how the "automatic refresh" feature works for AI/BI dashboards that use serverless SQL endpoints. I'm asking because sometimes the published dashboard refreshes when viewing the link ...

  • 845 Views
  • 2 replies
  • 0 kudos
Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

The dashboard will refresh under conditions such as a manual refresh, a scheduled refresh, or when a parameter value changes, which also triggers a refresh.

  • 0 kudos
1 More Replies
151640
by New Contributor III
  • 726 Views
  • 2 replies
  • 0 kudos

Databricks JDBC driver: DatabaseMetaData.getColumns does not return columns of VARIANT type

The ResultSet returned by DatabaseMetaData.getColumns does not include the VARIANT column in a table; it only includes the non-VARIANT column. Databricks JDBC driver 02.06.40.1071. create table tvariant(rnum int, c1 variant);

  • 726 Views
  • 2 replies
  • 0 kudos
Latest Reply
filipniziol
Contributor III
  • 0 kudos

Hi @151640, according to the documentation, the VARIANT data type is not supported by the Databricks JDBC driver. Here is the list of supported data types:

  • 0 kudos
1 More Replies
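As a hedged illustration of a possible workaround for the thread above (not confirmed by the driver documentation): assuming the JDBC client can accept the column as JSON text, the VARIANT column can be exposed through a view that casts it to STRING. The view name and cast are illustrative only.

-- Hypothetical workaround: expose the VARIANT column as JSON text so that
-- JDBC clients that do not understand VARIANT can still read it.
CREATE OR REPLACE VIEW tvariant_jdbc AS
SELECT
  rnum,
  CAST(c1 AS STRING) AS c1_json  -- VARIANT rendered as its JSON string form
FROM tvariant;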
andre_rizzatti
by New Contributor II
  • 2362 Views
  • 3 replies
  • 2 kudos

SQL warehouse: case-insensitive queries

Good morning, is there any parameter or configuration that makes all of my data queries case-insensitive?

  • 2362 Views
  • 3 replies
  • 2 kudos
Latest Reply
MarianoRanu
New Contributor II
  • 2 kudos

Hi @raphaelblg, do you know of any update to this or any workaround? Regards, Mariano

  • 2 kudos
2 More Replies
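For readers of the thread above, a minimal sketch of common per-query workarounds for case-insensitive matching in Databricks SQL; the customers table and name column are placeholders.

-- Case-insensitive pattern match:
SELECT * FROM customers WHERE name ILIKE 'smith%';

-- Case-insensitive equality by normalizing both sides:
SELECT * FROM customers WHERE LOWER(name) = LOWER('Smith');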
techuser
by New Contributor III
  • 10448 Views
  • 6 replies
  • 1 kudos

Databricks Liquid Clustering

Hi, is it possible to convert an existing partitioned Delta table that already has data to liquid clustering? If so, can you please suggest the required steps? I tried and searched but couldn't find any. Is it that liquid clustering can be done only for new Delta table...

  • 10448 Views
  • 6 replies
  • 1 kudos
Latest Reply
Raja_Databricks
New Contributor III
  • 1 kudos

Does liquid clustering accept MERGE, or how can upserts be done efficiently with a liquid-clustered Delta table?

  • 1 kudos
5 More Replies
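A minimal sketch related to the thread above, assuming the usual path of rewriting a partitioned Delta table into a new liquid-clustered table; the events, events_clustered, and events_updates names are placeholders, not taken from the thread.

-- Rewrite the partitioned table into a new liquid-clustered table.
CREATE OR REPLACE TABLE events_clustered
CLUSTER BY (event_date)
AS SELECT * FROM events;

-- MERGE (upsert) works against a liquid-clustered Delta table like any other.
MERGE INTO events_clustered AS t
USING events_updates AS s
ON t.event_id = s.event_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;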
Pavan3
by New Contributor II
  • 457 Views
  • 2 replies
  • 0 kudos

Regarding database location in DBFS

Hi, I have used "SET spark.sql.warehouse.dir", which creates the directory by default. Then I created the database with "CREATE DATABASE IF NOT EXISTS database_name;", but when I used "DESCRIBE DATABASE database_name" I could not find the loca...

  • 457 Views
  • 2 replies
  • 0 kudos
Latest Reply
filipniziol
Contributor III
  • 0 kudos

Hi @Pavan3, if the location shown by DESCRIBE DATABASE is empty, the database was created in the default catalog directory. What you can do is create any table in that database and run DESCRIBE DETAIL on that table. Hope it helps.

  • 0 kudos
1 More Replies
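A minimal sketch of the check suggested in the reply above, with placeholder database and table names.

CREATE DATABASE IF NOT EXISTS database_name;
DESCRIBE DATABASE database_name;            -- the location may show as empty

-- Create any table in the database and inspect its detail to reveal the path.
CREATE TABLE IF NOT EXISTS database_name.t_probe (id INT);
DESCRIBE DETAIL database_name.t_probe;      -- see the `location` column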
apiury
by New Contributor III
  • 1013 Views
  • 1 replies
  • 0 kudos

Connect a .NET app to a Delta table warehouse

Hi! I'm developing a .NET app and I want to use the Databricks warehouse as its database. I have gold Delta tables that I want to query. In the documentation I can see an ODBC/JDBC driver; are those connectors fast? Is there another way to connect? What ...

  • 1013 Views
  • 1 replies
  • 0 kudos
Latest Reply
rangu
New Contributor III
  • 0 kudos

We have been using .NET apps connected to Databricks Delta tables through clusters, using ODBC to achieve this. However, we recently hit a roadblock after the UC migration, where the UC all-purpose cluster started giving issues with queries ...

  • 0 kudos
Aya-Ahmed
by New Contributor II
  • 839 Views
  • 2 replies
  • 0 kudos

Parquet Encryption/Decryption in Databricks

Hi everyone, I'm curious about Databricks' approach to encrypting and decrypting Parquet files. Does Databricks adhere to standard encryption/decryption methods for Parquet? If not, what specific methods or techniques are used? I'd appreciate any insig...

  • 839 Views
  • 2 replies
  • 0 kudos
Latest Reply
Witold
Honored Contributor
  • 0 kudos

Since Databricks uses Spark, you should be able to use, for example, columnar encryption. Besides that, you can look into the AES-specific functions.

  • 0 kudos
1 More Replies
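A minimal sketch of the built-in AES functions mentioned in the reply above; the plaintext and the 16-byte key are illustrative only, and a real key should come from a secret store rather than a literal.

SELECT
  base64(aes_encrypt('sensitive value', 'abcdefghijklmnop')) AS ciphertext_b64,
  CAST(
    aes_decrypt(
      aes_encrypt('sensitive value', 'abcdefghijklmnop'),
      'abcdefghijklmnop'
    ) AS STRING
  ) AS roundtrip;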
AndrejZ
by New Contributor
  • 692 Views
  • 1 replies
  • 0 kudos

Shared Parameters between queries on a dashboard

I would like to create a simple governance dashboard with multiple queries (a query to see user login events, a query to see SQL statements run, a query for jobs executed, etc.). What I would like to do is have a single user name parameter which would ...

  • 692 Views
  • 1 replies
  • 0 kudos
Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Yes, you can set dashboard parameters so that you provide the username in a single parameter or widget and it gets distributed to the different queries: https://docs.databricks.com/en/dashboards/parameters.html

  • 0 kudos
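As a hedged illustration of the reply above: each dashboard query can reference the same named parameter (here :user_name), and one widget then drives every query that uses it. The audit-log query below is a hypothetical example, not taken from the thread.

-- Hypothetical governance query driven by the shared :user_name parameter.
SELECT event_time, action_name
FROM system.access.audit
WHERE user_identity.email = :user_name
ORDER BY event_time DESC;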
qwerty3
by Contributor
  • 2885 Views
  • 21 replies
  • 3 kudos

Spark dataframe performing poorly

I have huge datasets; transformations, display, print, and show all work well on this data when it is read into a pandas DataFrame. But the same DataFrame, when converted to a Spark DataFrame, takes minutes to display even a single row and hours to write th...

  • 2885 Views
  • 21 replies
  • 3 kudos
Latest Reply
gchandra
Databricks Employee
  • 3 kudos

I understand you want it sooner. Did it at least write the data in 10 minutes, compared to not finishing the write before? There are more knobs you can tweak, such as spark.sql.shuffle.partitions=auto. Do you have any index columns in your spatial data that can be us...

  • 3 kudos
20 More Replies
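For reference, the knob mentioned in the reply above can also be set from SQL; the auto value relies on Databricks' adaptive shuffle behavior, so treat this as a sketch rather than a guaranteed fix.

SET spark.sql.shuffle.partitions = auto;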
qwerty3
by Contributor
  • 556 Views
  • 3 replies
  • 0 kudos

Unable to obtain count of dataframe

I am unable to obtain a count of a DataFrame; it always gets stuck at one stage. I have tried reducing the size. What can be the issue? How can I read the cluster logs to identify the issue?

  • 556 Views
  • 3 replies
  • 0 kudos
Latest Reply
qwerty3
Contributor
  • 0 kudos

Driver memory is good enough; it is able to handle about 90 lakh (9 million) records, and what I am giving it is definitely less than that. What can I do about skewed data and shuffling?

  • 0 kudos
2 More Replies
Aminsnh
by New Contributor
  • 360 Views
  • 0 replies
  • 0 kudos

Adding customized shortcut keys

Hi all, I need to add a shortcut key for R's pipe operator (%>%) in my Databricks notebook. I want the operator to be inserted into my code snippet when I press the shortcut keys (Shift + Ctrl + M). Is there a straightforward way to add such shortcut...

  • 360 Views
  • 0 replies
  • 0 kudos
JS_L
by New Contributor II
  • 682 Views
  • 2 replies
  • 1 kudos

ERROR: key not found in SQL when trying to pass the result of a CTE as a function parameter

Hi Community, I try to pass the result of a CTE as a function parameter, as in the code below: WITH t1 AS ( SELECT array_join(collect_list(output), ',') AS x FROM my_catalog.my_db.get_x(:startTime, :endTime) ) SELECT 'AM_offline' as Type, CASE WHEN off...

  • 682 Views
  • 2 replies
  • 1 kudos
Latest Reply
JS_L
New Contributor II
  • 1 kudos

Hi @szymon_dybczak, thanks for replying. I don't think the issue is related to the data type, since the query works if I pass the subquery to the _x parameter without the CTE. Please see the code below: SELECT 'AM_offline' as Type, CASE WHEN offline_ratio > 1.5 THEN 'no-Go...

  • 1 kudos
1 More Replies
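A hedged sketch of the working pattern described in the reply above: pass the value as a scalar subquery directly in the function argument rather than referencing a CTE. The some_function name and its parameter are hypothetical stand-ins for the poster's actual function.

SELECT *
FROM my_catalog.my_db.some_function(
  (SELECT array_join(collect_list(output), ',')
   FROM my_catalog.my_db.get_x(:startTime, :endTime))
);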

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.
