Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

amitca71
by Contributor II
  • 5809 Views
  • 5 replies
  • 4 kudos

Resolved! Exception when using Java SQL client

Hi, I am trying to use the Java SQL client. I can see that the query on Databricks executes properly. However, on my client I get an exception (see below). Versions: JDK: jdk-20.0.1 (tried also with version 16, same results) https://www.oracle.com/il-en/java/technologies/...

Latest Reply
xebia
New Contributor II
  • 4 kudos

I am using Java 17 and getting the same error.

4 More Replies
dprutean
by New Contributor III
  • 9381 Views
  • 5 replies
  • 4 kudos

Resolved! JDBC Driver support for OpenJDK 17

Connecting to Databricks using OpenJDK 17, I got the exception below. Are there any plans to fix the driver for OpenJDK 17? java.sql.SQLException: [Databricks][DatabricksJDBCDriver](500540) Error caught in BackgroundFetcher. Foreground thread ID: 44. Ba...

Latest Reply
ameyabapat
New Contributor II
  • 4 kudos

I still see the above error with Databricks JDBC driver 2.6.33. Is anyone aware of a fix, either in the driver or in Java?

4 More Replies
GS2312
by New Contributor II
  • 4555 Views
  • 6 replies
  • 5 kudos

KeyProviderException when trying to create external table on databricks

Hi there, I have been trying to create an external table on Azure Databricks with the statement below. df.write.partitionBy("year", "month", "day").format('org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat').option("path",sourcepath).mod...
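A common cause of KeyProviderException on Azure Databricks is that the cluster cannot resolve the storage account key. A minimal sketch of one frequently suggested remedy, assuming ABFS storage and a secret scope (the scope, key, and account names below are hypothetical):

storage_account = "mystorageaccount"  # hypothetical account name
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    dbutils.secrets.get(scope="my-scope", key="storage-account-key"),
)

(df.write
   .partitionBy("year", "month", "day")
   .format("parquet")
   .option("path", sourcepath)   # sourcepath as in the post
   .mode("overwrite")            # hypothetical write mode
   .saveAsTable("my_external_table"))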

Latest Reply
Anonymous
Not applicable
  • 5 kudos

Hi @Gaurishankar Sakhare, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best ...

5 More Replies
haggholm
by New Contributor
  • 2209 Views
  • 2 replies
  • 1 kudos

Resolved! Query with ORDER BY fails with HiveThriftServerError "requirement failed: Subquery … has not finished"

Using ODBC or JDBC to read from a table fails when I attempt to use an ORDER BY clause. In one sample case, I have a fairly small table (just 1946 rows). select * from some_table order by some_field Result: java.lang.IllegalArgumentException: requiremen...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @petter@hightouch.com Petter, thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking on "Select As Best" if it doe...

1 More Replies
KarimSegura
by New Contributor III
  • 2913 Views
  • 3 replies
  • 4 kudos

databricks-connect throws an exception when showing a dataframe with json content

I'm facing an issue when I want to show a dataframe with JSON content. All this happens when the script runs in databricks-connect from VS Code. Basically, I would like any help or guidance to get this to run as it should. Thanks in advance. This is how...

Latest Reply
KarimSegura
New Contributor III
  • 4 kudos

The code works fine on a Databricks cluster, but this code is part of a unit test in a local environment, then submitted to a branch -> PR -> merged into the master branch. Thanks for the advice on using DBX. I will give DBX a try again even though I've already tried. I'l...
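For running such code as a local unit test without databricks-connect, one option is a plain local SparkSession fixture. A minimal sketch, assuming pyspark and pytest are installed locally (names and the test case are illustrative):

import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # Local session; no Databricks cluster or databricks-connect involved.
    session = (SparkSession.builder
               .master("local[2]")
               .appName("unit-tests")
               .getOrCreate())
    yield session
    session.stop()

def test_show_json(spark):
    # Hypothetical test: parse a JSON record and show it locally.
    df = spark.read.json(spark.sparkContext.parallelize(
        ['{"id": 1, "payload": {"a": "b"}}']))
    df.show(truncate=False)
    assert df.count() == 1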

2 More Replies
GKKarthi
by New Contributor
  • 4617 Views
  • 7 replies
  • 2 kudos

Resolved! Databricks - Simba SparkJDBCDriver 500550 exception

We have a Denodo big data platform hosted on Databricks. Recently we have been facing an exception with the message '[Simba][SparkJDBCDriver](500550)', which interrupts the Databricks connection after a certain time interval, usuall...

Latest Reply
PFBOLIVEIRA
New Contributor II
  • 2 kudos

Hi all, we are also experiencing the same behavior: [Simba][SimbaSparkJDBCDriver] (500550) The next rowset buffer is already marked as consumed. The fetch thread might have terminated unexpectedly. Foreground thread ID: xxxx. Background thread ID: yyyy...

6 More Replies
Mohit_m
by Valued Contributor II
  • 13170 Views
  • 1 reply
  • 1 kudos

Resolved! Job is failing with exception ClientAuthenticationError: DefaultAzureCredential failed to retrieve a token from the included credentials.

ClientAuthenticationError: DefaultAzureCredential failed to retrieve a token from the included credentials. Attempted credentials: EnvironmentCredential: EnvironmentCredential authentication unavailable. Environment variables are not fully configured. V...

Latest Reply
Mohit_m
Valued Contributor II
  • 1 kudos

The docs below are for reference: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/identity/azure-identity/migration_guide.md There was a suggestion to use from azure.common.credentials import ServicePrincipalCredentials instead of from azure...
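For reference, a minimal sketch of the azure-identity side of that migration: DefaultAzureCredential reads the service-principal environment variables, and the error above appears when they are missing. The placeholder values are hypothetical:

import os
from azure.identity import DefaultAzureCredential

# Hypothetical placeholders -- in practice these are set as cluster
# environment variables or pulled from a secret scope, never hard-coded.
os.environ["AZURE_TENANT_ID"] = "<tenant-id>"
os.environ["AZURE_CLIENT_ID"] = "<client-id>"
os.environ["AZURE_CLIENT_SECRET"] = "<client-secret>"

# With the three variables present, EnvironmentCredential (the first
# credential DefaultAzureCredential tries) can issue a token.
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")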

entimaniac
by New Contributor
  • 9393 Views
  • 1 reply
  • 0 kudos

How to catch exception from dbutils.widgets.get(...)

I'm trying to write Python notebook code that can be run from the Databricks web UI or from Airflow. I intend to pass parameters from Airflow via the Jobs API using notebook_params. From what I understand, these are accessible as widget values. dbutils....

  • 9393 Views
  • 1 replies
  • 0 kudos
Latest Reply
blt
New Contributor II
  • 0 kudos

I handle this exception with something like:

import py4j

try:
    value = dbutils.widgets.get("parameter")
except py4j.protocol.Py4JJavaError as e:
    print(e)

If you look more closely at the stack trace you'll see the origin of the message is from some...
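Building on that reply, a minimal sketch of a fallback pattern so the same notebook runs both from the Jobs API and interactively; the helper name and default are hypothetical, and a Databricks notebook context (where dbutils is predefined) is assumed:

import py4j

def get_param(name, default=None):
    # Return the widget value when launched via the Jobs API /
    # notebook_params, or the default when run interactively.
    try:
        return dbutils.widgets.get(name)
    except py4j.protocol.Py4JJavaError:
        return default

run_date = get_param("run_date", "1970-01-01")  # hypothetical parameter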

sarvesh
by Contributor III
  • 964 Views
  • 0 replies
  • 0 kudos

Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;

I am trying to read a 16 MB Excel file, and I was getting a GC overhead limit exceeded error. To resolve that, I tried to increase my executor memory with spark.conf.set("spark.executor.memory", "8g"), but I got the following stack: Using Spark's default l...

AzureDatabricks
by New Contributor III
  • 4412 Views
  • 8 replies
  • 4 kudos

Resolved! Need to see all the records in DeltaTable. Exception - java.lang.OutOfMemoryError: GC overhead limit exceeded

Truncate False is not working in the Delta table: df_delta.show(df_delta.count(), False). Compute size: Single Node - Standard_F4S - 8 GB memory, 4 cores. How much data can we persist at most in a Delta table as Parquet files, and how fast can we retrieve it?
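For context, df_delta.show(df_delta.count(), False) materializes every row on the driver, which is what exhausts an 8 GB single node. A minimal sketch of two memory-bounded alternatives (the page size and handler are hypothetical):

page_size = 1000  # hypothetical page size

# Option 1: inspect only a bounded number of rows without truncation.
df_delta.show(page_size, truncate=False)

# Option 2: stream rows to the driver one partition at a time instead of
# collecting the whole table at once.
for row in df_delta.toLocalIterator():
    handle(row)  # hypothetical per-row handler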

Latest Reply
AzureDatabricks
New Contributor III
  • 4 kudos

thank you !!!

7 More Replies
sarvesh
by Contributor III
  • 3507 Views
  • 3 replies
  • 4 kudos

Resolved! Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;

I am trying to read a 16 MB Excel file, and I was getting a GC overhead limit exceeded error. To resolve that, I tried to increase my executor memory with spark.conf.set("spark.executor.memory", "8g"), but I got the following stack: Using Spark's default l...

Latest Reply
Prabakar
Esteemed Contributor III
  • 4 kudos

On the cluster configuration page, go to the advanced options and click to expand the field. There you will find the Spark tab, where you can set the values in the "Spark config".
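For context, spark.executor.memory is a static configuration: it must exist before the executor JVMs start, which is why spark.conf.set raises the AnalysisException at runtime. A minimal sketch of where it can be set instead, outside Databricks, at session build time:

from pyspark.sql import SparkSession

# Static confs go in before getOrCreate(); calling
# spark.conf.set("spark.executor.memory", ...) on a live session raises
# the AnalysisException shown in the post.
spark = (SparkSession.builder
         .config("spark.executor.memory", "8g")
         .getOrCreate())

# On a Databricks cluster, the equivalent is one line in the cluster's
# Advanced options > Spark > "Spark config" box:
#   spark.executor.memory 8g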

2 More Replies
JK2021
by New Contributor III
  • 3475 Views
  • 5 replies
  • 3 kudos

Resolved! Exception handling in Databricks

We are planning to customise code on Databricks to call the Salesforce Bulk API 2.0 to load data from a Databricks Delta table into Salesforce. My question is: can all the exception handling, retries, and everything else around the Bulk API be coded explicitly in Databricks...

Latest Reply
Prabakar
Esteemed Contributor III
  • 3 kudos

Hi @Jazmine Kochan, I haven't tried the Salesforce Bulk API 2.0 to load data, but in theory it should be fine.

4 More Replies
Sandesh87
by New Contributor III
  • 2843 Views
  • 2 replies
  • 2 kudos

Resolved! dbutils.secrets.get- NoSuchElementException: None.get

The code below executes a 'get' API method to retrieve objects from S3 and write them to the data lake. The problem arises when I use dbutils.secrets.get to obtain the keys required to establish the connection to S3. my_dataframe.rdd.foreachPartition(partition ...

Latest Reply
Kaniz_Fatma
Community Manager
  • 2 kudos

Hi @Sandesh Puligundla, you just need to move the following two lines:

val AccessKey = dbutils.secrets.get(scope = "ADB_Scope", key = "AccessKey-ID")
val SecretKey = dbutils.secrets.get(scope = "ADB_Scope", key = "AccessKey-Secret")

outside of the fo...
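A sketch of that fix in Python for illustration (the thread's code is Scala): fetch the secrets once on the driver so only plain strings are captured in the closure; the boto3 client is hypothetical, for shape only:

# Fetched on the driver: dbutils is not defined on executors, which is
# what produces NoSuchElementException: None.get inside the closure.
access_key = dbutils.secrets.get(scope="ADB_Scope", key="AccessKey-ID")
secret_key = dbutils.secrets.get(scope="ADB_Scope", key="AccessKey-Secret")

def handle_partition(rows):
    import boto3  # hypothetical client, for shape only
    s3 = boto3.client("s3",
                      aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key)
    for row in rows:
        pass  # get objects from S3 and write to the data lake

my_dataframe.rdd.foreachPartition(handle_partition)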

1 More Replies