Warehousing & Analytics

Forum Posts

Priyam1
by New Contributor III
  • 53 Views
  • 1 replies
  • 0 kudos

databricks notebook cell doesn't show the output intermittently

Recently, it seems that there has been an intermittent issue where the output of a notebook cell doesn't display, even though the code within the cell executes successfully. For instance, there are times when simply printing a dataframe yields no out...

Latest Reply
Lakshay
Esteemed Contributor
  • 0 kudos

Do you see the output in the stdout log file in such a scenario?

Linglin
by New Contributor II
  • 364 Views
  • 3 replies
  • 0 kudos

How to pass multiple Value to a dynamic Variable in Dashboard underlying SQL

select   {{user_defined_variable}} as my_var,
         count(*) as cnt
from     my_table
where    {{user_defined_variable}} = {{value}}

For user_defined_variable, I use a query-based dropdown list to get a column_name I'd like ...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Linglin, It seems you’re dealing with user-defined variables in your SQL query, and you want to dynamically set both the column name and the value in your WHERE clause. Let’s break down the solution: Setting User-Defined Variables: You can s...
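Not part of the original reply, but for anyone who wants to test the same pattern in a notebook: below is a minimal sketch using PySpark named parameter markers and the IDENTIFIER clause (available on recent Databricks Runtimes, roughly 13.3+), which parameterizes both the column name and the value the way the dashboard widgets do. The table and column names are made up for the example, and `spark` is the session a Databricks notebook predefines.

```python
# Hypothetical values; in the dashboard these come from the query-based
# dropdown ({{user_defined_variable}}) and the {{value}} widget.
column_name = "country"
filter_value = "NL"

df = spark.sql(
    """
    SELECT IDENTIFIER(:col) AS my_var,
           count(*)         AS cnt
    FROM   my_table
    WHERE  IDENTIFIER(:col) = :val
    GROUP  BY 1
    """,
    args={"col": column_name, "val": filter_value},
)
df.show()
```

IDENTIFIER is the key piece: a plain `:col` parameter is always treated as a value, never as a column reference, which is why substituting a column name needs the extra clause.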

2 More Replies
Noortje
by New Contributor
  • 99 Views
  • 2 replies
  • 0 kudos

Databricks Looker Studio connector

Hi all! The Databricks Looker Studio connector has now been available for a few weeks. I tested the connector but am running into several issues: I am used to working with dynamic queries, so I am able to use date parameters (similar to BigQuery Looker St...

Warehousing & Analytics
BI tool connector
Looker Studio
Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Noortje, It’s unfortunate that you’re encountering issues with the Databricks Looker Studio connector. Let’s address your questions: Dynamic Queries and Date Parameters: I understand that you’re accustomed to using dynamic queries with date p...

1 More Replies
primaj
by New Contributor III
  • 1884 Views
  • 14 replies
  • 9 kudos

Introspecting catalogs and schemas JDBC in Pycharm

Hey, I've managed to add my SQL Warehouse as a data source in PyCharm using the JDBC driver and can query the warehouse from an SQL console within PyCharm. This is great; however, what I'm struggling with is getting the catalogs and schemas to show in...

Latest Reply
gem7318
New Contributor
  • 9 kudos

You need to explicitly tell your JetBrains tool to introspect the database using JDBC metadata. I think the reason it (sometimes) works in DataGrip but not PyCharm, IntelliJ, etc. is that the default settings can be different across tools and even v...
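As a hedged sanity check (not from the thread): you can confirm the warehouse returns catalog and schema metadata over the wire with the databricks-sql-connector package, which narrows the problem to the IDE's introspection settings rather than the driver. Hostname, HTTP path, and token below are placeholders.

```python
from databricks import sql  # pip install databricks-sql-connector

with sql.connect(
    server_hostname="<workspace-host>",
    http_path="/sql/1.0/warehouses/<warehouse-id>",
    access_token="<personal-access-token>",
) as conn:
    with conn.cursor() as cur:
        cur.catalogs()                    # JDBC/ODBC-style metadata call
        print([row[0] for row in cur.fetchall()])
        cur.schemas(catalog_name="main")  # list schemas in one catalog
        print(cur.fetchall())
```

If this prints your catalogs but PyCharm still shows nothing, the fix is in the data source's introspection options, not the connection.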

13 More Replies
Jennifer
by New Contributor III
  • 479 Views
  • 4 replies
  • 1 kudos

How do I write dataframe to s3 without partition column name on the path

I am currently trying to write a DataFrame to S3 like
df.write.partitionBy("col1","col2").mode("overwrite").format("json").save("s3a://my_bucket/")
The path becomes `s3a://my_bucket/col1=abc/col2=opq/`, but I want the path to be `s3a://my_bucket/abc/opq/`...

Latest Reply
Sidhant07
New Contributor III
  • 1 kudos

Hi @Jennifer , The default behavior of the .partitionBy() function in Spark is to create a directory structure with partition column names. This is similar to Hive's partitioning scheme and is done for optimization purposes. Hence, you cannot directl...
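Not from the thread, but a common workaround sketch when the Hive-style `col=value` directories are unacceptable: drive the writes yourself, one per partition pair. It assumes the number of distinct (col1, col2) pairs is small, since each pair triggers a separate job; names are hypothetical.

```python
# Collect the distinct partition pairs to the driver (assumes there are few).
pairs = df.select("col1", "col2").distinct().collect()

for row in pairs:
    (
        df.filter((df["col1"] == row["col1"]) & (df["col2"] == row["col2"]))
        .drop("col1", "col2")  # optional: omit the partition columns from the files
        .write.mode("overwrite")
        .format("json")
        .save(f"s3a://my_bucket/{row['col1']}/{row['col2']}/")
    )
```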

3 More Replies
Laurens
by New Contributor II
  • 362 Views
  • 2 replies
  • 0 kudos

Setting up a snowflake catalog via spark config next to unity catalog

I'm trying to set up a connection to Iceberg on S3 via Snowflake as described in https://medium.com/snowflake/how-to-integrate-databricks-with-snowflake-managed-iceberg-tables-7a8895c2c724 and https://docs.snowflake.com/en/user-guide/tables-iceberg-catal...

Warehousing & Analytics
catalog
config
snowflake
spark
Unity Catalog
Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Laurens, Integrating Iceberg tables on S3 with Snowflake can be done effectively, even when using the Unity Catalog. Let’s break down the steps to achieve this: Configure an External Volume to Amazon S3: Set up an external volume that points...
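For reference (a sketch, not a confirmed recipe from this thread): the Iceberg docs describe a Snowflake catalog that can be registered alongside other catalogs via Spark conf. Something like the following, assuming the iceberg-spark-runtime and snowflake-jdbc jars are on the cluster; the account URL, catalog name, and auth properties are placeholders, and coexistence with Unity Catalog should be verified on your runtime.

```python
# Register a Snowflake-backed Iceberg catalog for this session.
spark.conf.set("spark.sql.catalog.snowflake_catalog",
               "org.apache.iceberg.spark.SparkCatalog")
spark.conf.set("spark.sql.catalog.snowflake_catalog.catalog-impl",
               "org.apache.iceberg.snowflake.SnowflakeCatalog")
spark.conf.set("spark.sql.catalog.snowflake_catalog.uri",
               "jdbc:snowflake://<account>.snowflakecomputing.com")
# ...plus your Snowflake JDBC auth properties for this catalog.

spark.sql("SHOW NAMESPACES IN snowflake_catalog").show()
```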

1 More Replies
96286
by Contributor
  • 1011 Views
  • 4 replies
  • 1 kudos

Enabling serverless type for SQL warehouse running on Google Cloud Platform

I am in the process of connecting Looker to one of my Databricks databases. To reduce startup time on my SQL warehouse cluster, I would like to change the type from "Pro" to "Serverless". I cannot find a way to do that, and "Serverless" is not an optio...

Warehousing & Analytics
GCP
serverless
sql
warehouse
Latest Reply
Kayla
Contributor
  • 1 kudos

Echoing glawry - I'd be fascinated to know if these "ephemeral clusters" are a thing.

3 More Replies
as5
by New Contributor
  • 197 Views
  • 1 replies
  • 0 kudos

SQL Warehouse - increasing concurrent queries limit

Hello everyone, I would like to inquire about the possibility of increasing the default limit of concurrent queries on the cluster, which is set to 10. While researching this topic, I noticed that there is no official documentation available regarding t...

Warehousing & Analytics
ConcurrentQueries
warehouse
Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @as5, Let’s delve into the details of increasing the concurrent queries limit for SQL warehouse clusters in Databricks. Default Concurrent Queries Limit: By default, Databricks limits the number of concurrent queries per cluster assigned to a ...
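Since the per-cluster limit is fixed, the practical lever is scaling the warehouse out to more clusters so fewer queries queue. Below is a hedged sketch against the SQL Warehouses REST API; the host, warehouse id, and cluster count are placeholders, and the token is read from an environment variable.

```python
import os
import requests

host = "https://<workspace-host>"
warehouse_id = "<warehouse-id>"

resp = requests.post(
    f"{host}/api/2.0/sql/warehouses/{warehouse_id}/edit",
    headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
    # Roughly 10 concurrent queries per cluster, so 4 clusters ~= 40.
    json={"max_num_clusters": 4},
)
resp.raise_for_status()
```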

chari
by Contributor
  • 450 Views
  • 1 replies
  • 1 kudos

Resolved! What is databricks SQL, spark SQL and how are they different from MS SQL ?

Hello Databricks Community, I have a hard time understanding how Databricks SQL is different from Microsoft SQL. Also, why does Databricks provide Spark SQL? If you can direct me to a well-written webpage or document, it would be of immense help! Thanks,

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @chari,  Certainly! Let’s delve into the differences between Databricks SQL and Microsoft SQL Server, as well as the rationale behind Spark SQL in Databricks. Databricks SQL vs. Microsoft SQL Server: Databricks SQL is an integral part of the ...
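To make the Spark SQL point concrete, here is a small illustration (the data and names are made up): the same distributed query can be written as SQL or with the DataFrame API, and both run on the same engine, which is the main reason Databricks offers Spark SQL alongside Python.

```python
from pyspark.sql import functions as F

df = spark.createDataFrame(
    [("books", 12.0), ("books", 8.0), ("games", 30.0)],
    ["category", "price"],
)
df.createOrReplaceTempView("sales")

# SQL flavor
spark.sql("SELECT category, sum(price) AS total FROM sales GROUP BY category").show()

# DataFrame flavor: same plan, same result
df.groupBy("category").agg(F.sum("price").alias("total")).show()
```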

rushank29
by New Contributor II
  • 90 Views
  • 1 replies
  • 0 kudos

pm4py no visualization

Hi, I am trying to use the pm4py library to visualize my data. My code executes perfectly, but there is no visualization. How can I solve this problem? There is no error message. #processmining #databricks #pm4py

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @rushank29, Here are a few steps you can take to address the problem: Check Dependencies: Ensure that you have all the necessary dependencies installed. Sometimes missing or outdated packages can cause issues with visualization libraries.Verif...
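One frequent cause in notebook environments (an assumption here, not a confirmed diagnosis): pm4py's interactive view_* helpers try to open a local image viewer that a notebook backend doesn't have. A workaround sketch is to render to a file and display it inline; the log path is a placeholder and graphviz must be installed on the cluster.

```python
import pm4py
from IPython.display import Image, display

log = pm4py.read_xes("/dbfs/tmp/event_log.xes")        # placeholder path
net, im, fm = pm4py.discover_petri_net_inductive(log)  # mine a Petri net

# Render to a file instead of calling pm4py.view_petri_net(...), then show it.
pm4py.save_vis_petri_net(net, im, fm, "/tmp/petri_net.png")
display(Image(filename="/tmp/petri_net.png"))
```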

557879
by New Contributor
  • 307 Views
  • 1 replies
  • 0 kudos

Tableau connection error - errorCode=180002

Hi, I am trying to connect to Databricks from Tableau Server and am facing this error: "OAuth error response, generally means someone clicked cancel: access_denied (errorCode=180002)". I have added it in "app connections" under the account console. Any pointers w...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @557879, The OAuth error response you’re encountering—specifically the “access_denied” error with errorCode=180002—typically indicates that the user cancelled the authorization process or denied requested permissions. Let’s troubleshoot this issu...

JustinM
by New Contributor II
  • 669 Views
  • 3 replies
  • 1 kudos

Cannot connect to SQL Warehouse using JDBC connector in Spark

When trying to connect to a SQL warehouse using the JDBC connector with Spark, the error below is thrown. Note that connecting directly to a cluster with similar connection parameters works without issue; the error only occurs with SQL warehouses: py4j...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @JustinM, Check your configuration settings: Ensure that the dbtable configuration is correctly set in your Spark code. The dbtable option should specify the table you want to load from your SQL warehouse.   Update JDBC driver: Make sure you’re us...
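For comparison, a minimal sketch of a JDBC read against a SQL warehouse (placeholders throughout; it assumes the Databricks JDBC driver is on the classpath, and note that a warehouse's HTTP path differs from a cluster's):

```python
jdbc_url = (
    "jdbc:databricks://<workspace-host>:443;"
    "httpPath=/sql/1.0/warehouses/<warehouse-id>;"
    "AuthMech=3;UID=token;PWD=<personal-access-token>"
)

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("driver", "com.databricks.client.jdbc.Driver")
    .option("dbtable", "my_catalog.my_schema.my_table")  # hypothetical table
    .load()
)
df.show()
```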

2 More Replies
Anuroop
by New Contributor II
  • 525 Views
  • 3 replies
  • 2 kudos

Ticket

Hi Khishore, please help me with how you raised a ticket for the certificate issue. Thanks, Anuroop

Latest Reply
AshR
New Contributor III
  • 2 kudos

Please submit a ticket to our Training Team here: https://help.databricks.com/s/contact-us?ReqType=training

2 More Replies
doremon11
by New Contributor
  • 322 Views
  • 1 replies
  • 0 kudos

unable to perform modifications on Table while Using Python UDF in query

Here, we're trying to use a Python UDF inside the query:
  • taking the table as function input
  • converting the table into a DataFrame
  • performing modifications
  • converting the DataFrame back into a table
  • returning the table
How can we create a Spark context inside U...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @doremon11, Unfortunately, creating a Spark context inside a UDF directly in the query is not possible. The Spark context is a global object and cannot be created within a UDF. UDFs are designed to operate on data within a DataFrame, not to create...
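A sketch of the driver-side pattern instead (table and column names are hypothetical): do the table-level work with the DataFrame API on the driver, where `spark` already exists, and keep UDFs for row- or column-level logic only.

```python
from pyspark.sql import functions as F

def modify_table(input_table: str, output_table: str) -> None:
    df = spark.table(input_table)                         # table -> DataFrame
    df = df.withColumn("amount", F.col("amount") * 2)     # example modification
    df.write.mode("overwrite").saveAsTable(output_table)  # DataFrame -> table

modify_table("my_schema.orders", "my_schema.orders_modified")
```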
