- 12088 Views
- 5 replies
- 5 kudos
Connecting to Databricks using OpenJDK 17, I got the exception below. Are there any plans to fix the driver for OpenJDK 17? java.sql.SQLException: [Databricks][DatabricksJDBCDriver](500540) Error caught in BackgroundFetcher. Foreground thread ID: 44. Ba...
Latest Reply
I still see the above error with Databricks JDBC driver 2.6.33. Is anyone aware of a fix available, either in the driver or in Java?
4 More Replies
- 7754 Views
- 6 replies
- 5 kudos
Hi, I am trying to use java.sql. I can see that the query on Databricks is executed properly. However, on my client I get an exception (see below). Versions: JDK: jdk-20.0.1 (also tried version 16, same results) https://www.oracle.com/il-en/java/technologies/...
by
JK2021
• New Contributor III
- 4368 Views
- 6 replies
- 3 kudos
We are planning to customise code on Databricks to call the Salesforce Bulk API 2.0 to load data from a Databricks Delta table to Salesforce. My question is: can all the exception handling and retries around the Bulk API be coded explicitly in Databricks...
by
GS2312
• New Contributor II
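The thread above asks whether exception handling and retries around the Salesforce Bulk API 2.0 can be coded explicitly in Databricks. A minimal Python sketch of that pattern is shown below; the requests library, the API version, and the way the instance URL and token are obtained are all assumptions, not details from the thread.

import time
import requests

API_VERSION = "v57.0"  # assumed API version; adjust to your Salesforce org

def create_ingest_job(instance_url, token, sobject):
    # Create a Bulk API 2.0 ingest job and return its id.
    resp = requests.post(
        f"{instance_url}/services/data/{API_VERSION}/jobs/ingest",
        headers={"Authorization": f"Bearer {token}"},
        json={"object": sobject, "operation": "insert", "contentType": "CSV"},
    )
    resp.raise_for_status()
    return resp.json()["id"]

def call_with_retries(func, attempts=3, backoff_seconds=5):
    # Explicit retry wrapper with linear backoff around a Bulk API call.
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except requests.RequestException:
            if attempt == attempts:
                raise
            time.sleep(backoff_seconds * attempt)

# Hypothetical usage:
# job_id = call_with_retries(lambda: create_ingest_job(instance_url, token, "Account"))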
- 5692 Views
- 6 replies
- 5 kudos
Hi there, I have been trying to create an external table on Azure Databricks with the statement below: df.write.partitionBy("year", "month", "day").format('org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat').option("path", sourcepath).mod...
Latest Reply
Hi @Gaurishankar Sakhare, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best ...
5 More Replies
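The write statement in the question above is truncated, so for comparison here is a minimal sketch of the commonly used shape of such a call; df and sourcepath are the question's own variables, while the overwrite mode and the target table name are assumptions.

(
    df.write
      .partitionBy("year", "month", "day")
      .format("parquet")                    # shorthand for the ParquetFileFormat class name
      .option("path", sourcepath)           # an explicit path makes the table external/unmanaged
      .mode("overwrite")                    # assumed; the original statement is cut off at ".mod..."
      .saveAsTable("my_database.my_table")  # hypothetical target table
)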
- 2792 Views
- 2 replies
- 1 kudos
Using ODBC or JDBC to read from a table fails when I attempt to use an ORDER BY clause. In one sample case, I have a fairly small table (just 1946 rows).
select *
from some_table
order by some_field
Result: java.lang.IllegalArgumentException: requiremen...
Latest Reply
Hi @petter@hightouch.com Petter, thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking on "Select As Best" if it doe...
1 More Reply
- 3314 Views
- 2 replies
- 4 kudos
I'm facing an issue when I want to show a dataframe with JSON content. All this happens when the script runs via databricks-connect from VS Code. Basically, I would like any help or guidance to get this running as it should. Thanks in advance. This is how...
Latest Reply
The code works fine on a Databricks cluster, but it is part of a unit test in a local environment that is then submitted to a branch -> PR -> merged into the master branch. Thanks for the advice on using DBX. I will give DBX another try even though I've already tried it. I'l...
1 More Reply
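The original script in the thread above is not shown in full, so the snippet below is only a minimal local repro sketch for showing a DataFrame with JSON content, assuming a working databricks-connect (or local) SparkSession; the column and schema names are made up.

from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.getOrCreate()

schema = StructType([StructField("name", StringType()), StructField("city", StringType())])
df = spark.createDataFrame([('{"name": "a", "city": "b"}',)], ["raw"])
parsed = df.withColumn("parsed", from_json(col("raw"), schema))

# show() serializes rows back to the client, which is where databricks-connect
# version mismatches between the local library and the cluster usually surface.
parsed.show(truncate=False)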
- 5805 Views
- 6 replies
- 2 kudos
We have a Denodo big data platform hosted on Databricks. Recently we have been facing an exception with the message '[Simba][SparkJDBCDriver](500550)', which interrupts the Databricks connection after a certain time interval, usuall...
Latest Reply
Hi all, we are also experiencing the same behavior: [Simba][SimbaSparkJDBCDriver] (500550) The next rowset buffer is already marked as consumed. The fetch thread might have terminated unexpectedly. Foreground thread ID: xxxx. Background thread ID: yyyy...
5 More Replies
- 14400 Views
- 1 reply
- 1 kudos
ClientAuthenticationError: DefaultAzureCredential failed to retrieve a token from the included credentials. Attempted credentials: EnvironmentCredential: EnvironmentCredential authentication unavailable. Environment variables are not fully configured. V...
Latest Reply
Below docs are for reference:
https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/identity/azure-identity/migration_guide.md
There was a suggestion given to use
from azure.common.credentials import ServicePrincipalCredentials
instead of
from azure...
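One way to sidestep the DefaultAzureCredential fallback chain mentioned in the error is to construct the credential explicitly. The following is a minimal sketch assuming the azure-identity package and a service principal stored in a Databricks secret scope; the scope and key names are placeholders, not values from the thread.

from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id=dbutils.secrets.get("my-scope", "tenant-id"),        # hypothetical scope/key names
    client_id=dbutils.secrets.get("my-scope", "client-id"),
    client_secret=dbutils.secrets.get("my-scope", "client-secret"),
)

# Requesting a token directly isolates any failure to this one credential
# instead of the whole DefaultAzureCredential chain.
token = credential.get_token("https://management.azure.com/.default")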
- 10982 Views
- 1 reply
- 0 kudos
I'm trying to write Python notebook code that can be run from the Databricks web UI or from Airflow. I intend to pass parameters from Airflow via the Jobs API using notebook_params. From what I understand, these are accessible as widget values.
dbutils....
Latest Reply
I handle this exception with something like:
import py4j
try:
    value = dbutils.widgets.get("parameter")
except py4j.protocol.Py4JJavaError as e:
    print(e)
If you look more closely at the stack trace you'll see the origin of the message is from some...
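A small usage sketch building on the reply above: declaring the widget with a default lets the same notebook run from the web UI (default used) and from Airflow, where notebook_params overrides it. The parameter name and default value here are illustrative.

dbutils.widgets.text("parameter", "default_value")  # creates the widget if it does not exist
value = dbutils.widgets.get("parameter")            # returns the Airflow-supplied value when triggered via notebook_params
print(f"running with parameter={value}")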
- 1144 Views
- 0 replies
- 0 kudos
I am trying to read a 16 MB Excel file and I was getting a GC overhead limit exceeded error. To resolve that I tried to increase my executor memory with spark.conf.set("spark.executor.memory", "8g"), but I got the following stack: Using Spark's default l...
- 5364 Views
- 8 replies
- 4 kudos
Truncate False is not working on a Delta table: df_delta.show(df_delta.count(), False). Compute size: Single Node - Standard_F4S - 8 GB memory, 4 cores. How much data can we persist in a Delta table as Parquet files, and how fast can we retrieve it?
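For the show/truncate question above, a minimal sketch is below. Passing df_delta.count() as the row count asks the driver of a single-node cluster to materialise the whole table, so the row limits here are illustrative caps rather than values from the thread.

# Keyword form of the same call, with an explicit cap on rows returned to the driver.
df_delta.show(n=100, truncate=False)

# For ad-hoc inspection of larger samples in a notebook, limit before display.
display(df_delta.limit(1000))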
- 4409 Views
- 3 replies
- 4 kudos
I am trying to read a 16 MB Excel file and I was getting a GC overhead limit exceeded error. To resolve that I tried to increase my executor memory with spark.conf.set("spark.executor.memory", "8g"), but I got the following stack: Using Spark's default l...
Latest Reply
On the cluster configuration page, go to Advanced Options and click to expand it. There you will find the Spark tab, where you can set the values in the "Spark config" field.
2 More Replies
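Expanding on the reply above with a minimal sketch: spark.executor.memory is a launch-time setting, so calling spark.conf.set after the cluster is already running has no effect. On Databricks it belongs in the cluster's "Spark config" field; when you create the session yourself (local Spark or tests) it can be passed at build time instead. The app name below is illustrative.

# Cluster UI: Advanced Options -> Spark -> Spark config, one setting per line, e.g.
#   spark.executor.memory 8g

# Self-managed session (local Spark, unit tests):
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("excel-read")                  # hypothetical app name
    .config("spark.executor.memory", "8g")  # must be set before the executors start
    .getOrCreate()
)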
- 3443 Views
- 1 reply
- 0 kudos
The code below executes a 'get' API method to retrieve objects from S3 and write them to the data lake. The problem arises when I use dbutils.secrets.get to get the keys required to establish the connection to S3: my_dataframe.rdd.foreachPartition(partition ...
Latest Reply
Howdy @Sandesh Puligundla - Thank you for your question. Thank you for your patience. I'd like to give this a bit longer to see how the community responds. Hang tight!
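A common cause of this kind of failure is that dbutils exists only on the driver, so dbutils.secrets.get cannot be called from inside foreachPartition. A minimal sketch of the usual workaround is below: read the secrets once on the driver and let the partition function capture the plain values. The scope, key, bucket, and column names are placeholders, as is the boto3-based S3 call.

access_key = dbutils.secrets.get("aws-scope", "access-key")   # resolved on the driver
secret_key = dbutils.secrets.get("aws-scope", "secret-key")

def process_partition(rows):
    import boto3  # imported inside the function so it is available on the executor
    s3 = boto3.client(
        "s3",
        aws_access_key_id=access_key,          # plain values captured in the closure
        aws_secret_access_key=secret_key,
    )
    for row in rows:
        s3.get_object(Bucket="my-bucket", Key=row["key"])  # hypothetical bucket and column name

my_dataframe.rdd.foreachPartition(process_partition)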