Warehousing & Analytics
Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Community. Share insights, tips, and best practices for leveraging data for informed decision-making.

Forum Posts

MadelynM
by Databricks Employee
  • 2931 Views
  • 0 replies
  • 0 kudos

[Recap] Data + AI Summit 2024 - Warehousing & Analytics | Improve performance and increase insights

Here's your Data + AI Summit 2024 - Warehousing & Analytics recap as you use intelligent data warehousing to improve performance and increase your organization’s productivity with analytics, dashboards and insights.  Keynote: Data Warehouse presente...

Warehousing & Analytics
AI BI Dashboards
AI BI Genie
Databricks SQL
amelia1
by New Contributor II
  • 2141 Views
  • 1 reply
  • 0 kudos

Local pyspark read data using jdbc driver returns column names only

Hello, I have an Azure SQL warehouse serverless instance that I can connect to using databricks-sql-connector. But when I try to use pyspark and a JDBC driver URL, I can't read or write. See my code below: def get_jdbc_url(): # Define your Databricks p...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

The error does not look specific to the warehouse that you are connecting to. The error message "Unrecognized conversion specifier [msg] starting at position 54 in conversion pattern" indicates that there is an issue with the logging configuration in...

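The question above builds a JDBC URL for a Databricks SQL warehouse. A minimal sketch of such a helper is below; the hostname, HTTP path, and token values are placeholders, not the poster's actual configuration:

```python
def get_jdbc_url(host: str, http_path: str, token: str) -> str:
    """Build a Databricks JDBC URL (illustrative values only).

    The Databricks JDBC driver authenticates a personal access token as
    the password (PWD) with the literal user name 'token' (AuthMech=3),
    and needs the warehouse's HTTP path.
    """
    return (
        f"jdbc:databricks://{host}:443/default;"
        f"transportMode=http;ssl=1;"
        f"httpPath={http_path};"
        f"AuthMech=3;UID=token;PWD={token}"
    )

url = get_jdbc_url("adb-1234.5.azuredatabricks.net",
                   "/sql/1.0/warehouses/abc123", "dapiXXXX")
```

A URL like this would then be passed to `spark.read.format("jdbc").option("url", url).option("dbtable", "my_table").load()`; if only column names come back, it is worth checking the driver version and the logging configuration flagged in the reply above.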
harveychun
by New Contributor II
  • 1963 Views
  • 3 replies
  • 0 kudos

Measures or KPI

With Databricks BI, is there any way to create a KPI or measure that can be used in a visual? If so, how can that be achieved?

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Can you please provide some more context on your current use case?

2 More Replies
OfirM
by New Contributor
  • 722 Views
  • 1 reply
  • 0 kudos

spark.databricks.optimizer.replaceWindowsWithAggregates.enabled

I have seen in the release notes of 15.3 that this was introduced and couldn't wrap my head around it. Does someone have an example of a plan before and after? Quote: "Performance improvement for some window functions. This release includes a change that im...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Before Optimization: Consider a query that calculates the sum of a column value partitioned by category without an ORDER BY clause or a window_frame parameter:   SELECT category, SUM(value) OVER (PARTITION BY category) AS total_value FROM sales;  ...

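The rewrite the reply describes can be illustrated outside Spark: a SUM window over a partition with no ORDER BY and no frame assigns every row the same per-group total, which is exactly what a GROUP BY aggregate joined back onto the rows produces. A pure-Python sketch with invented sample data:

```python
from collections import defaultdict

sales = [
    {"category": "a", "value": 10},
    {"category": "a", "value": 5},
    {"category": "b", "value": 7},
]

def window_totals(rows):
    """'Window' form: every row gets SUM(value) OVER (PARTITION BY category)."""
    out = []
    for row in rows:
        total = sum(r["value"] for r in rows if r["category"] == row["category"])
        out.append({**row, "total_value": total})
    return out

def aggregate_totals(rows):
    """'Aggregate' form: GROUP BY once, then attach the totals back to each row."""
    totals = defaultdict(int)
    for r in rows:
        totals[r["category"]] += r["value"]
    return [{**r, "total_value": totals[r["category"]]} for r in rows]

# Same result either way, but the aggregate form scans the group only once.
assert window_totals(sales) == aggregate_totals(sales)
```

The optimizer flag in the thread title enables Spark to perform this substitution automatically when the window has no ordering or frame, avoiding the per-row window evaluation.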
rk2511
by New Contributor
  • 1071 Views
  • 1 reply
  • 0 kudos

Access Each Input Item of a For Each Task

I have two tasks. The first task (Sample_Notebook) returns a JSON array (Input_List). Sample data in Input_List: ['key1':value1, 'key2':value2, 'key3':value3]. The second task is a "For Each" task that executes a notebook for each entry in the Input_List...

Latest Reply
BigRoux
Databricks Employee
  • 0 kudos

To access each item of the iteration within the notebook of the second task in your Databricks workflow, you need to utilize the parameterization feature of the For Each task. Instead of trying to retrieve the entire list using dbutils.jobs.taskValue...

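A hedged sketch of the setup the reply describes, as a Jobs JSON fragment: the For Each task iterates over the first task's task value and hands each element to the child notebook via the `{{input}}` reference. Task keys, the notebook path, and the parameter name `item` are illustrative, not taken from the poster's job:

```json
{
  "task_key": "Process_Each_Item",
  "depends_on": [{"task_key": "Sample_Notebook"}],
  "for_each_task": {
    "inputs": "{{tasks.Sample_Notebook.values.Input_List}}",
    "task": {
      "task_key": "Process_Item",
      "notebook_task": {
        "notebook_path": "/path/to/child_notebook",
        "base_parameters": {"item": "{{input}}"}
      }
    }
  }
}
```

Inside the child notebook, `dbutils.widgets.get("item")` would then return the single element for that iteration, rather than the whole list.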
hank12345
by New Contributor
  • 3832 Views
  • 2 replies
  • 0 kudos

Resolved! Lakehouse federation support for Oracle DB

https://docs.databricks.com/en/query-federation/index.html Are there plans to provide Oracle support for Databricks on AWS lakehouse federation? Not sure if that's possible or not. Thanks!

Latest Reply
PiotrU
Contributor II
  • 0 kudos

Federation with Oracle is available https://learn.microsoft.com/en-us/azure/databricks/query-federation/oracle

1 More Reply
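Per the docs linked in the reply, Oracle federation is configured as a Unity Catalog connection plus a foreign catalog. A hedged SQL sketch; every name, host, and credential below is a placeholder:

```sql
-- Create a Unity Catalog connection to the Oracle instance (placeholder values)
CREATE CONNECTION oracle_conn TYPE oracle
OPTIONS (
  host 'oracle-db.example.com',
  port '1521',
  user 'my_user',
  password 'my_password'
);

-- Mirror an Oracle service as a foreign catalog so it can be queried from Databricks
CREATE FOREIGN CATALOG oracle_catalog
USING CONNECTION oracle_conn
OPTIONS (service_name 'my_service');
```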
maoruales32
by New Contributor
  • 352 Views
  • 1 reply
  • 0 kudos

Point map datapoints labels

The point map visualization's datapoint labels do not let me select a specific column for them.

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Previously replied in https://community.databricks.com/t5/warehousing-analytics/datapoints-labels-on-a-point-map-visualization/td-p/101839 

aburkh
by New Contributor
  • 652 Views
  • 1 reply
  • 0 kudos

User default timezone (SQL)

Users get confused when querying data with timestamps because UTC is not intuitive for many. It is possible to set TIME ZONE at query level or at SQL Warehouse level, but those options fail to address the need of multiple users working on the same wa...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

It is possible to set the time zone at the session level using the SET TIME ZONE statement in Databricks SQL. This allows users to control the local time zone used for timestamp operations within their session. However, there is no direct option of us...

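The session-level setting the reply mentions looks like this in Databricks SQL; the zone name is just an example:

```sql
-- Applies only to the current session; each user/session sets its own zone
SET TIME ZONE 'America/Los_Angeles';
SELECT current_timestamp();  -- rendered in the session time zone
```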
atikiwala
by New Contributor II
  • 3655 Views
  • 1 reply
  • 0 kudos

Resolved! Working with Databricks Apps

I'm trying to use Databricks Apps to host a Streamlit app to serve an interactive application. I face two limitations: 1. In the environment for the App I see it using a certain Python version, but how do I update it to use another version? It is already set to...

Latest Reply
parthSundarka
Databricks Employee
  • 0 kudos

Hi @atikiwala , Good Day! Python 3.11 is currently the only version we support. We are thinking of adding additional options in the future. Would love to hear your feedback on this - https://docs.databricks.com/en/resources/ideas.html#submit-product-...

igorstar
by New Contributor III
  • 6059 Views
  • 3 replies
  • 2 kudos

Resolved! What is the difference between LIVE TABLE and MATERIALIZED VIEW?

From the DLT documentation it seems that the LIVE TABLE is conceptually the same as MATERIALIZED VIEW. When should I use one over another?

Latest Reply
Mo
Databricks Employee
  • 2 kudos

@ImranA and @igorstar, I repost my response here again: to create materialized views, you could use CREATE OR REFRESH LIVE TABLE; however, according to the official docs, the CREATE OR REFRESH LIVE TABLE syntax to create a materialized view is deprecat...

2 More Replies
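The two spellings the reply contrasts, sketched from the DLT docs; the table and column names are illustrative:

```sql
-- Deprecated spelling
CREATE OR REFRESH LIVE TABLE daily_totals AS
SELECT order_date, SUM(amount) AS total FROM orders GROUP BY order_date;

-- Preferred spelling: same object, same refresh semantics
CREATE OR REFRESH MATERIALIZED VIEW daily_totals AS
SELECT order_date, SUM(amount) AS total FROM orders GROUP BY order_date;
```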
AnnaP
by New Contributor II
  • 1587 Views
  • 1 reply
  • 0 kudos

[UNBOUND_SQL_PARAMETER] error

Hi, I'd appreciate it if anyone could help! We are using the official ODBC driver (Simba Spark ODBC Driver 64-bit 2.08.02.1013) in our application via C++ APIs. All the following SQL statements are passed through the ODBC API to Databricks: successfully executing: C...

Latest Reply
PiotrMi
Contributor
  • 0 kudos

@AnnaP Hey, did you try the below? To disable the SQL Connector feature, select the Use Native Query check box. Important: • When this option is enabled, the connector cannot execute parameterized queries. • By default, the connector applies transformations to...

Akshay_Petkar
by Contributor III
  • 1756 Views
  • 1 reply
  • 2 kudos

How to Create a Live Streaming Dashboard on Databricks?

I am working on a use case where I have streaming data that needs to be displayed in real-time on a live dashboard. The goal is for any new data arriving in the stream to instantly reflect on the dashboard. Is this possible on Databricks? If yes, how...

Latest Reply
christopher356
New Contributor II
  • 2 kudos

@Akshay_Petkar wrote:I am working on a use case where I have streaming data that needs to be displayed in real-time on a live dashboard. The goal is for any new data arriving in the stream to instantly reflect on the dashboard. Is this possible on Da...

User16753724663
by Valued Contributor
  • 14870 Views
  • 5 replies
  • 3 kudos

Resolved! Unable to use CX_Oracle library in notebook

While using cx_oracle python library, it returns the below error:   error message: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory     The cx_oracle library is dependent on native...

Latest Reply
ovbieAmen
New Contributor II
  • 3 kudos

Hi @AshvinManoj, I used your script and still get the same error.
sudo echo 'LD_LIBRARY_PATH="/dbfs/databricks/instantclient_23_6"' >> /databricks/spark/conf/spark-env.sh
sudo echo 'ORACLE_HOME="/dbfs/databricks/instantclient_23_6"' >> /databricks/spark...

4 More Replies
mbhakta
by New Contributor II
  • 6583 Views
  • 3 replies
  • 2 kudos

Change Databricks Connection on Power BI (service)

We're creating a report with Power BI using data from our AWS Databricks workspace. Currently, I can view the report on Power BI (service) after publishing. Is there a way to change the data source connection, e.g. if I want to change the data source...

Latest Reply
J_C
New Contributor II
  • 2 kudos

In the Power BI transform data view, you should be able to access the M-Query code and actually change the server and the host directly. My recommendation is to create a couple of parameters to keep this info for all your queries. Then you can just c...

2 More Replies
anardinelli
by Databricks Employee
  • 770 Views
  • 0 replies
  • 2 kudos

How do I dimension my DBSQL warehouse correctly?

What is the optimal number of cluster/nodes in a warehouse? Depends on your workload. Our DBSQL guide suggests a size range on two main things: time to execute your query and bytes spilled from it. This link can help you understand better optimizatio...
