Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

LightUp
by New Contributor III
  • 6875 Views
  • 3 replies
  • 4 kudos

Converting SQL Code to SQL Databricks

I am new to Databricks. Please excuse my ignorance. My requirement is to convert the SQL query below into Databricks SQL. The query reads from the EventLog table and its output goes into EventSummary. These queries can be found here: CREATE TABL...

Latest Reply
thelogicplus
New Contributor III
  • 4 kudos

You may explore the tools and services from Travinto Technologies. They have very good tools. We explored their tool for our code conversion from Informatica, DataStage and Ab Initio to Databricks and PySpark. We also used it for SQL queries, stored ...

2 More Replies
dimsh
by Contributor
  • 14136 Views
  • 13 replies
  • 10 kudos

How to overcome missing query parameters in Databricks SQL?

Hi there! I'm trying to build my first dashboard based on Databricks SQL. As far as I can see, once you define a query parameter you can't skip it later. I'm looking for any option to make my parameter optional. For instance, I have a ta...

Latest Reply
techg
New Contributor II
  • 10 kudos

Is there any solution for the above-mentioned post?

12 More Replies
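For the parameter question above, a common workaround (not confirmed in this thread) is to reserve a sentinel value such as 'All' so the filter effectively becomes optional. A minimal sketch, assuming a hypothetical events table and the parameterized spark.sql support in recent runtimes (Spark 3.4+); in a Databricks SQL dashboard the same predicate can be written against a dashboard parameter:

    # Hypothetical sketch: ':category' behaves as optional when set to 'All'.
    df = spark.sql(
        """
        SELECT *
        FROM events                                   -- hypothetical table
        WHERE (:category = 'All' OR category = :category)
        """,
        args={"category": "All"},                     # 'All' keeps every row; any other value filters
    )
    df.show()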
Ajay-Pandey
by Esteemed Contributor III
  • 4581 Views
  • 4 replies
  • 5 kudos

Support of running multiple cells at a time in Databricks notebook

Hi all, Databricks notebooks now support parallel run of commands in a single notebook, which helps run ad hoc queries simultaneously without creating a separate notebook. Once you run...

Latest Reply
SenthilRT
New Contributor III
  • 5 kudos

Can we run parallel cell execution for Python (PySpark) cells?

3 More Replies
mickniz
by Contributor
  • 24945 Views
  • 8 replies
  • 18 kudos

cannot import name 'sql' from 'databricks'

I am working on a Databricks version 10.4 premium cluster, and while importing sql from the databricks module I am getting the error below: cannot import name 'sql' from 'databricks' (/databricks/python/lib/python3.8/site-packages/databricks/__init__.py). Trying...

Latest Reply
ameet9257
New Contributor III
  • 18 kudos

If you ever receive this kind of error after installing the correct Python package, try running the command below: dbutils.library.restartPython()

7 More Replies
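For the import error above, it usually means the package that provides databricks.sql is not yet visible to the interpreter. A minimal sketch, assuming the goal is the Databricks SQL Connector for Python; the connection details are placeholders:

    # In a notebook, install and restart Python first (shown as comments here):
    #   %pip install databricks-sql-connector
    #   dbutils.library.restartPython()

    from databricks import sql

    # Placeholder connection details for a SQL warehouse.
    with sql.connect(server_hostname="<workspace-host>",
                     http_path="<sql-warehouse-http-path>",
                     access_token="<personal-access-token>") as conn:
        with conn.cursor() as cursor:
            cursor.execute("SELECT 1")
            print(cursor.fetchall())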
noimeta
by Contributor III
  • 4895 Views
  • 9 replies
  • 4 kudos

Resolved! Databricks SQL: catalog of each query

Currently, we are migrating from the Hive metastore to UC. We have several dashboards and a huge number of queries whose catalogs have been set to hive_metastore and that use the <db>.<table> access pattern. I'm just wondering if there's a way to switch catalogs...

Latest Reply
h_h_ak
Contributor
  • 4 kudos

Maybe you can also have a look here if you need a hot fix: https://github.com/databrickslabs/ucx

8 More Replies
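For the catalog question above, one low-touch option (a sketch, not the thread's accepted answer) is to set the session's default catalog once, so legacy <db>.<table> references resolve against whichever catalog you choose; catalog and table names below are hypothetical:

    # Point the session at a catalog; legacy two-part names then resolve inside it.
    spark.sql("USE CATALOG hive_metastore")                    # or USE CATALOG main after migration
    spark.sql("SELECT COUNT(*) FROM sales.orders").show()      # resolves to <current catalog>.sales.orders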
Data_Engineer3
by Contributor III
  • 2791 Views
  • 5 replies
  • 0 kudos

Default maximum spark streaming chunk size in delta files in each batch?

When working with Delta files in Spark Structured Streaming, what is the default maximum chunk size in each batch? How do I identify this type of Spark configuration in Databricks? #[Databricks SQL] #[Spark streaming] #[Spark structured streaming] #Spark

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

Doc: https://docs.databricks.com/en/structured-streaming/delta-lake.html. Also, what is the challenge while using foreachBatch?

4 More Replies
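On the question above: for a Delta streaming source the per-batch size is governed by maxFilesPerTrigger and maxBytesPerTrigger (per the linked doc, maxFilesPerTrigger defaults to 1000 files per micro-batch when unset). A minimal sketch with a hypothetical path:

    # Cap the micro-batch size of a Delta streaming source.
    stream = (spark.readStream
              .format("delta")
              .option("maxFilesPerTrigger", 200)     # at most 200 new files per micro-batch
              .option("maxBytesPerTrigger", "1g")    # soft cap on bytes per micro-batch
              .load("/mnt/delta/events"))            # hypothetical source path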
merca
by Valued Contributor II
  • 9077 Views
  • 12 replies
  • 7 kudos

Value array {{QUERY_RESULT_ROWS}} in Databricks SQL alerts custom template

Please include in the documentation an example of how to incorporate the `QUERY_RESULT_ROWS` variable in the custom template.

Latest Reply
CJK053000
New Contributor III
  • 7 kudos

Databricks confirmed this was an issue on their end and it should be resolved now. It is working for me.

11 More Replies
Ericsson
by New Contributor II
  • 3013 Views
  • 3 replies
  • 1 kudos

SQL week format issue: it's not showing the result as 01 (ww)

Hi folks, I have a requirement to show the week number in ww format. Please see the code below: select weekofyear(date_add(to_date(current_date, 'yyyyMMdd'), +35)). Also please refer to the screenshot for the result.

Latest Reply
Feltonrolfson
New Contributor II
  • 1 kudos

It seems you're encountering an issue with SQL week formatting, where the results aren't displaying as expected (01 (ww)). This could impact data analysis.

2 More Replies
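For the week-format question above: weekofyear() returns an integer, so left-padding it to two digits gives the 01-style value; week-based format patterns are restricted in Spark 3+, which makes lpad the safer route. A small sketch reusing the date arithmetic from the post:

    # Format the ISO week number as two digits, e.g. '01'.
    spark.sql(
        "SELECT lpad(weekofyear(date_add(current_date(), 35)), 2, '0') AS week_ww"
    ).show()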
Graham
by New Contributor III
  • 4727 Views
  • 4 replies
  • 3 kudos

Resolved! Inline comment next to un-tickmarked SET statement = Syntax error

Running this code in Databricks SQL works great: SET USE_CACHED_RESULT = FALSE; -- Result: -- key value -- USE_CACHED_RESULT FALSE. If I add an inline comment, however, I get a syntax error: SET USE_CACHED_RESUL...

Latest Reply
rafal_walisko
New Contributor II
  • 3 kudos

Hi, I'm getting the same error when trying to execute the statement through the API: "statement": "SET `USE_CACHED_RESULT` = FALSE; SELECT COUNT(*) FROM TABLE". Every combination fails: "status": { "state": "FAILED", "error": { "e...

3 More Replies
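On the SET-plus-comment issue above, a pragmatic sketch (an assumption on my part, not a confirmed fix): keep comments outside the statement text, since SET appears to take the rest of the line as the raw configuration value; the table name below is hypothetical:

    # Keep the explanation in Python comments so the SET parser never sees it.
    spark.sql("SET use_cached_result = false")            # comment lives here, not inline in the SQL
    spark.sql("SELECT COUNT(*) FROM some_table").show()   # hypothetical table name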
VVM
by New Contributor III
  • 15926 Views
  • 13 replies
  • 3 kudos

Resolved! Databricks SQL - Unable to Escape Dollar Sign ($) in Column Name

It seems that, due to how Databricks processes SQL cells, it's impossible to escape the $ when it comes to a column name. I would expect the following to work: %sql SELECT 'hi' `$id`. The backticks ought to escape everything. And indeed that's exactly wha...

Latest Reply
Pfizer
New Contributor II
  • 3 kudos

What is the status of this bug? This is affecting user experience.  

12 More Replies
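For the dollar-sign thread above, the usual explanation is that $name substitution in %sql cells runs before the statement is parsed, so backticks cannot protect it; issuing the same statement from Python (a sketch, under that assumption) sidesteps the substitution:

    # Python strings are not scanned for $-parameters, so the backticked name survives.
    spark.sql("SELECT 'hi' AS `$id`").show()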
yopbibo
by Contributor II
  • 19537 Views
  • 4 replies
  • 4 kudos

How can I connect to an Azure SQL db from a Databricks notebook?

I know how to do it with Spark, and read/write tables (like https://docs.microsoft.com/en-gb/azure/databricks/data/data-sources/sql-databases#python-example). But this time, I need to only update a field of a specific row in a table. I do not think I ...

Latest Reply
yopbibo
Contributor II
  • 4 kudos

Thanks for the link. I may be wrong, but they describe how to connect with Spark. They do not provide a connection engine that we could use directly (like with pyodbc), or an engine that we could use in pandas, for example.

3 More Replies
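For the row-level update discussed above, a non-Spark route is pyodbc, provided the Microsoft ODBC driver is installed on the cluster (for example via an init script) and pyodbc is pip-installed; server, database, credentials, and table below are all placeholders:

    import pyodbc

    # Placeholder connection string; msodbcsql18 must be present on the cluster.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=myserver.database.windows.net;"
        "DATABASE=mydb;UID=myuser;PWD=mypassword"
    )
    cur = conn.cursor()
    cur.execute("UPDATE dbo.orders SET status = ? WHERE order_id = ?", ("shipped", 42))
    conn.commit()
    cur.close()
    conn.close()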
isaac_gritz
by Databricks Employee
  • 3526 Views
  • 5 replies
  • 5 kudos

SQL IDE Support

How to use a SQL IDE with Databricks SQL. Databricks provides SQL IDE support using DataGrip and DBeaver with Databricks SQL. Let us know in the comments if you've used DataGrip or DBeaver with Databricks! Let us know if there are any other SQL IDEs you...

Latest Reply
Jag
New Contributor III
  • 5 kudos

DBeaver is working perfectly fine, but I found one issue: it won't show the correct error for a query.

4 More Replies
sp1
by New Contributor II
  • 13554 Views
  • 5 replies
  • 4 kudos

Resolved! Pass date value as parameter in Databricks SQL notebook

I want to pass yesterday's date (in the example, 20230115*.csv) into the CSV file path. I don't know how to create a parameter and use it here. CREATE OR REPLACE TEMPORARY VIEW abc_delivery_log USING CSV OPTIONS ( header="true", delimiter=",", inferSchema="true", pat...

Latest Reply
Asifpanjwani
New Contributor II
  • 4 kudos

@Retired_mod @sp1 @Chaitanya_Raju @daniel_sahal Hi everyone, I need the same scenario in SQL code, because my DBR cluster does not allow me to run Python code. Error: Unsupported cell during execution. SQL warehouses only support executing SQL cells. I appr...

4 More Replies
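For the date-parameter question above: on an all-purpose cluster a Python cell can compute yesterday's date and splice it into the view definition (this does not help on a SQL warehouse, where only SQL cells run and a query parameter would be needed instead). A sketch with a hypothetical base path:

    from datetime import date, timedelta

    # Yesterday in yyyyMMdd form, using the driver's clock.
    yday = (date.today() - timedelta(days=1)).strftime("%Y%m%d")

    spark.sql(f"""
        CREATE OR REPLACE TEMPORARY VIEW abc_delivery_log
        USING CSV
        OPTIONS (header "true", delimiter ",", inferSchema "true",
                 path "/mnt/landing/{yday}*.csv")
    """)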
vanepet
by New Contributor II
  • 16486 Views
  • 5 replies
  • 2 kudos

Is it possible to use multiprocessing or threads to submit multiple queries to a database from Databricks in parallel?

We are trying to improve our overall runtime by running queries in parallel using either multiprocessing or threads. What I am seeing, though, is that when the function that runs this code is run in a separate process, it doesn't return a DataFrame with...

Latest Reply
BapsDBS
New Contributor II
  • 2 kudos

Thanks for the links mentioned above, but both of them use raw Python to achieve parallelism. Does this mean Spark (read: PySpark) has no explicit provisions for parallel execution of functions or even notebooks? We used a wrapper notebook with ThreadP...

4 More Replies
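On the parallelism question above: threads in a single Python process share the SparkSession and can submit queries concurrently, whereas a DataFrame created in a separate process cannot cross the process boundary, which matches the symptom described. A small sketch with hypothetical query texts:

    from concurrent.futures import ThreadPoolExecutor

    queries = [
        "SELECT COUNT(*) FROM sales.orders",       # hypothetical tables
        "SELECT COUNT(*) FROM sales.customers",
    ]

    def run(q):
        return spark.sql(q).collect()              # each call schedules its own Spark jobs

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run, queries))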
Ajay-Pandey
by Esteemed Contributor III
  • 1823 Views
  • 2 replies
  • 7 kudos

docs.databricks.com

Rename and drop columns with Delta Lake column mapping. Hi all, Databricks now supports column rename and drop. Column mapping requires the following Delta protocols: Reader version 2 or above; Writer version 5 or above. Blog URL ## Available in D...

Latest Reply
Poovarasan
New Contributor III
  • 7 kudos

The above-mentioned feature is not working in the DLT pipeline if the script has more than 4 columns.

1 More Replies
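To make the feature above concrete, a sketch of enabling column mapping on an existing Delta table and then renaming and dropping columns (table and column names are hypothetical; the DLT limitation mentioned in the reply is a separate issue):

    # Raise the protocol versions and switch on column mapping, then rename/drop.
    spark.sql("""
        ALTER TABLE sales.orders SET TBLPROPERTIES (
          'delta.minReaderVersion' = '2',
          'delta.minWriterVersion' = '5',
          'delta.columnMapping.mode' = 'name'
        )
    """)
    spark.sql("ALTER TABLE sales.orders RENAME COLUMN order_ts TO ordered_at")
    spark.sql("ALTER TABLE sales.orders DROP COLUMN legacy_flag")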