Data Engineering

Forum Posts

shelms
by New Contributor II
  • 6446 Views
  • 3 replies
  • 7 kudos

Resolved! SQL CONCAT returning null

Has anyone else experienced this problem? I'm attempting to SQL concat two fields, and if the second field is null, the entire string comes back as null. The documentation is unclear on the expected outcome, and contrary to how concat_ws operates. SELECT ...
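For reference, a minimal runnable sketch of the behavior in question (any Spark session will do): concat returns NULL as soon as any argument is NULL, while concat_ws simply skips NULL arguments.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # concat: NULL if any input is NULL; concat_ws: NULL inputs are skipped.
    spark.sql("""
        SELECT
            concat('first', NULL)         AS concat_result,    -- NULL
            concat_ws('-', 'first', NULL) AS concat_ws_result  -- 'first'
    """).show()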

Latest Reply
Kaniz
Community Manager
  • 7 kudos

Hi @Steve Helms, did you get your answer, or do you need more help? If your problem is resolved, would you like to mark the best answer?

2 More Replies
gianni77
by New Contributor
  • 37090 Views
  • 12 replies
  • 4 kudos

How can I export a result of a SQL query from a databricks notebook?

The "Download CSV" button in the notebook seems to work only for results <=1000 entries. How can I export larger result-sets as CSV?

Latest Reply
Anonymous
Not applicable
  • 4 kudos

Within the Databricks SQL interface (in the SQL editor) you can now download the full results as a CSV. Just make sure to uncheck "LIMIT 1000" and then click the download button under "..." in the bottom left.
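For result sets too large for the UI, one common workaround is to run the query from a notebook and write the full result to storage. A minimal sketch, assuming a hypothetical table name and DBFS output path:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    df = spark.sql("SELECT * FROM my_table")       # hypothetical query
    (df.coalesce(1)                                # single CSV file; drop for very large results
       .write.mode("overwrite")
       .option("header", "true")
       .csv("dbfs:/tmp/exports/my_table_csv"))     # hypothetical output path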

11 More Replies
Anonymous
by Not applicable
  • 675 Views
  • 1 reply
  • 0 kudos

How to resolve Quickbooks error 12007

QuickBooks error 12007 occurs when an update times out. QuickBooks may encounter this error when it cannot connect to the internet or is unable to access the server. If you want to know the solutions, check out our latest blog on this.

Latest Reply
willjoe
New Contributor III
  • 0 kudos

How to Resolve QuickBooks Payroll Update Error 12007? For the various possible causes of the QB payroll update error 12007, you need to perform different troubleshooting procedures. Follow the solutions in their given sequence to fix this QuickBooks error...

Henry
by New Contributor II
  • 1642 Views
  • 5 replies
  • 0 kudos

Resolved! Cannot login Databricks Community Edition with new account

It seems I am not able to log into Databricks Community Edition. I recently created a new account and had it verified. However, whenever I try to log in, I am redirected to the same page without any errors. When I do en...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Henry Xiang, is your account working now, or do you need any help?

4 More Replies
Sudeshna
by New Contributor III
  • 1107 Views
  • 2 replies
  • 3 kudos

How can I pass one of the values from one function to another as an argument in Databricks SQL?

For example: CREATE OR REPLACE TABLE table2(a INT, b INT); INSERT INTO table2 VALUES (100, 200); CREATE OR REPLACE FUNCTION func1() RETURNS TABLE(a INT, b INT) RETURN (SELECT a+b, a*b FROM table2); CREATE OR REPLACE FUNCTION calc(p DOUBLE) RETURNS TABLE(val...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 3 kudos

Yes, it is possible, but with different logic. A scalar call like calc(a) in SELECT calc(a) FROM func1(); can only be used in a query, since passing a table into a scalar function is not allowed. So please try something like: CREATE OR REPLACE FUNCTION func_table() RETURNS TABLE(a ...
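A minimal sketch of that pattern, built on the tables and functions from the question (calc_table is a hypothetical name), run from a notebook cell: the second function is declared as a table function whose body selects from the first, rather than receiving a table as a scalar argument.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    spark.sql("CREATE OR REPLACE TABLE table2(a INT, b INT)")
    spark.sql("INSERT INTO table2 VALUES (100, 200)")
    spark.sql("""
        CREATE OR REPLACE FUNCTION func1() RETURNS TABLE(a INT, b INT)
        RETURN (SELECT a + b, a * b FROM table2)
    """)
    # calc_table queries func1() in its own body instead of taking it as input.
    spark.sql("""
        CREATE OR REPLACE FUNCTION calc_table() RETURNS TABLE(val DOUBLE)
        RETURN (SELECT CAST(a AS DOUBLE) / b FROM func1())
    """)
    spark.sql("SELECT * FROM calc_table()").show()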

1 More Replies
shan_chandra
by Honored Contributor III
  • 8878 Views
  • 2 replies
  • 0 kudos
Latest Reply
shan_chandra
Honored Contributor III
  • 0 kudos

%scala
def clearAllCaching(tableName: Option[String] = None): Unit = {
  tableName.map { path =>
    com.databricks.sql.transaction.tahoe.DeltaValidation.invalidateCache(spark, path)
  }
  spark.conf.set("com.databricks.sql.io.caching.bucketedRead.enabled", "f...
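The snippet above relies on an internal Databricks class. A hedged alternative using only documented Spark APIs (it clears cached data but does not replicate the Delta-specific invalidation; the table name is hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    spark.catalog.clearCache()                     # drop all cached tables and DataFrames
    spark.sql("UNCACHE TABLE IF EXISTS my_table")  # or target one table by name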

1 More Replies
Pragan
by New Contributor
  • 2013 Views
  • 5 replies
  • 1 kudos

Resolved! Cluster doesn't support Photon with Docker Image enabled

I enabled the Photon 9.1 LTS DBR on a cluster that was already using a Docker image of the latest version. When I ran a SQL query on the cluster, I could not see any Photon engine activity in my executors, which should actually have been running on the Photon engine. When...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hello @Praganessh S, Photon is currently in Public Preview. The only way to use it is to explicitly run the Databricks-provided runtime images which contain it. Please see: https://docs.databricks.com/runtime/photon.html#databricks-clusters and https://do...

4 More Replies
databrick_comm
by New Contributor II
  • 2889 Views
  • 5 replies
  • 0 kudos

Resolved! Not able to connect to Denodo VDP from Databricks

I would like to connect to Denodo VDP from a Databricks workspace. I have installed the ODBC client and the Denodo JAR on the cluster, but I am not able to understand the other steps. Could you please help me?
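For reference, a hedged sketch of one way to query Denodo VDP over JDBC from a notebook, assuming the Denodo JDBC driver JAR is installed on the cluster; host, port, database, view, and credentials are placeholders, and the driver class and URL format should be verified against the Denodo documentation:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    df = (spark.read.format("jdbc")
          .option("url", "jdbc:vdb://denodo-host:9999/my_database")  # placeholder
          .option("driver", "com.denodo.vdp.jdbc.Driver")            # verify against Denodo docs
          .option("dbtable", "my_view")                              # placeholder Denodo view
          .option("user", "denodo_user")                             # placeholder
          .option("password", "denodo_password")                     # placeholder; prefer a secret scope
          .load())
    df.show()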

Latest Reply
User16753724663
Valued Contributor
  • 0 kudos

Hi @sathyanarayan kokku, are you trying to install the Denodo VDP server in Databricks?

4 More Replies
SimonY
by New Contributor III
  • 1502 Views
  • 3 replies
  • 3 kudos

Resolved! Trigger.AvailableNow does not support maxOffsetsPerTrigger in Databricks runtime 10.3

Hello, I ran a Spark streaming job to ingest data from Kafka to test Trigger.AvailableNow. What environment did the job run in?
1. Databricks Runtime 10.3
2. Azure cloud
3. 1 driver node + 3 worker nodes (14 GB, 4 cores)
val maxOffsetsPerTrigger = "500"
spark.conf.set...
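A minimal sketch of that setup (Kafka broker, topic, and paths are placeholders), assuming a runtime where Trigger.AvailableNow honors the rate limit — per this thread, DBR 10.3 ignores maxOffsetsPerTrigger with this trigger:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    df = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "test_topic")                 # placeholder
          .option("maxOffsetsPerTrigger", "500")             # rate limit per micro-batch
          .load())

    (df.writeStream
       .format("delta")
       .option("checkpointLocation", "/tmp/checkpoints/kafka_test")  # placeholder
       .trigger(availableNow=True)  # drain all available data, then stop
       .start("/tmp/delta/kafka_test"))                              # placeholder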

Latest Reply
Anonymous
Not applicable
  • 3 kudos

You'd be better off with 1 node with 12 cores than 3 nodes with 4 each. Your shuffles are going to be much better on one machine.

2 More Replies
fermin_vicente
by New Contributor III
  • 2795 Views
  • 7 replies
  • 4 kudos

Resolved! Can secrets be retrieved only for the scope of an init script?

Hi there, if I set any secret in an env var to be used by a cluster-scoped init script, it remains available to users attaching any notebook to the cluster and is easily extracted with a print. There's some hint in the documentation about the secret...
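A short illustration of the leak described above versus the redacted alternative, run from any notebook attached to the cluster (env var, scope, and key names are hypothetical; dbutils is the notebook-provided utility):

    import os

    print(os.environ.get("MY_SECRET_ENV"))            # secret visible in plain text to any user
    print(dbutils.secrets.get("my-scope", "my-key"))  # notebook output shows [REDACTED]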

Latest Reply
pavan_kumar
Contributor
  • 4 kudos

@Fermin Vicente, good to know that this approach is working well, but please make sure that you use it only at the end of your init script.

6 More Replies
DarshilDesai
by New Contributor II
  • 8991 Views
  • 3 replies
  • 3 kudos

Resolved! How to Efficiently Read Nested JSON in PySpark?

I am having trouble efficiently reading and parsing a large number of stream files in PySpark! Context: here is the schema of the stream file that I am reading in JSON. Blank spaces are edits for confidentiality purposes. root |-- location_info: ar...
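A hedged sketch of the usual approach: provide an explicit schema up front (schema inference across many files is typically the bottleneck) and flatten the nested array with explode. The field names are hypothetical stand-ins for the redacted schema in the post:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, explode
    from pyspark.sql.types import ArrayType, StringType, StructField, StructType

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical stand-in for the post's "location_info: array<struct<...>>" schema.
    schema = StructType([
        StructField("location_info", ArrayType(StructType([
            StructField("city", StringType()),
            StructField("region", StringType()),
        ]))),
    ])

    df = spark.read.schema(schema).json("dbfs:/path/to/stream/files/")  # placeholder path
    flat = (df.select(explode(col("location_info")).alias("loc"))
              .select("loc.city", "loc.region"))
    flat.show()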

Latest Reply
Kaniz
Community Manager
  • 3 kudos

Hi @Darshil Desai, how are you? Were you able to resolve your problem?

2 More Replies
Hubert-Dudek
by Esteemed Contributor III
  • 485 Views
  • 1 reply
  • 19 kudos

Runtime 10.4 is available and is LTS. From today it is no longer beta: it is LTS, meaning Long-Term Support. So for sure it will be with us for the next...

Runtime 10.4 is available and is LTS. From today it is no longer beta: it is LTS, meaning Long-Term Support, so for sure it will be with us for the next 2 years. 10.4 includes some awesome features like: Auto Compaction rollbacks are now enabled by defaul...

Latest Reply
-werners-
Esteemed Contributor III
  • 19 kudos

I have the same favorite. I am curious how it works under the hood. zipWithIndex?

Hubert-Dudek
by Esteemed Contributor III
  • 7466 Views
  • 29 replies
  • 40 kudos

Resolved! SparkFiles - strange behavior on Azure databricks (runtime 10)

When you use from pyspark import SparkFiles and spark.sparkContext.addFile(url), it adds the file to the non-DBFS /local_disk0/, but then when you want to read the file with spark.read.json(SparkFiles.get("file_name")), it wants to read it from /dbfs/local_disk0/. I tried als...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 40 kudos

I confirm that, as @Arvind Ravish said, adding file:/// solves the problem.
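A minimal sketch of the confirmed fix (URL and file name are placeholders): prefix the local path from SparkFiles.get with file:// so Spark reads the driver's local disk instead of DBFS.

    from pyspark import SparkFiles
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    spark.sparkContext.addFile("https://example.com/data.json")  # placeholder URL

    local_path = SparkFiles.get("data.json")      # e.g. /local_disk0/...
    df = spark.read.json("file://" + local_path)  # file:// forces the local filesystem
    df.show()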

28 More Replies
Gvsmao
by New Contributor III
  • 2222 Views
  • 8 replies
  • 3 kudos

Resolved! SQL Databricks - Spot VMs (Cost Optimized)

Hello! I want to ask a question, please! Referring to Spot VMs with the "Cost Optimized" setting: in the case of an X-Small endpoint, which has 2 workers, if I send 10 simultaneous queries and a worker is evicted, can I get an error in any of these querie...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Thanks for the information, I will try to figure out more. Keep sharing such informative posts. www.mygroundbiz.com

7 More Replies