Data Engineering

Forum Posts

swzzzsw (New Contributor III)
  • 2489 Views
  • 4 replies
  • 0 kudos

Resolved! SQLServerException: deadlock

I'm using Databricks to connect to a SQL managed instance via JDBC. The SQL operations I need to perform include DELETE, UPDATE, and simple reads and writes. Since Spark syntax only handles simple reads and writes, I had to open a SQL connection using Scala an...
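For context, the raw-JDBC pattern described here looks roughly like the following Scala sketch; the connection string, credentials, and SQL statement are placeholder assumptions, not details from the post:

import java.sql.DriverManager

// Hedged sketch: issue DELETE/UPDATE statements that the DataFrame API cannot express.
// The URL, credentials, and SQL below are placeholders.
val url = "jdbc:sqlserver://myinstance.database.windows.net:1433;database=mydb"
val user = sys.env("SQL_USER")
val password = sys.env("SQL_PASSWORD")

val conn = DriverManager.getConnection(url, user, password)
try {
  val stmt = conn.createStatement()
  try stmt.executeUpdate("DELETE FROM dbo.staging WHERE load_date < '2021-01-01'")
  finally stmt.close()
} finally {
  conn.close()
}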

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

This is not a Spark error but purely a database issue. There are plenty of articles online on how to prevent deadlocks, but there is no single solution.
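One common mitigation, assuming the statement is safe to retry: catch SQL Server's deadlock-victim error (code 1205) and retry with backoff. A minimal sketch:

import java.sql.{Connection, SQLException}

// Hedged sketch: retry when chosen as a deadlock victim (SQL Server error code 1205).
// Only appropriate for idempotent statements.
def withDeadlockRetry(conn: Connection, sql: String, maxAttempts: Int = 3): Int = {
  var attempt = 0
  while (true) {
    attempt += 1
    try {
      val stmt = conn.createStatement()
      try return stmt.executeUpdate(sql)
      finally stmt.close()
    } catch {
      case e: SQLException if e.getErrorCode == 1205 && attempt < maxAttempts =>
        Thread.sleep(500L * attempt) // simple linear backoff before retrying
    }
  }
  0 // unreachable
}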

3 More Replies
Alexander1 (New Contributor III)
  • 1658 Views
  • 5 replies
  • 0 kudos

Databricks JDBC 2.6.19 documentation

I am searching for the Databricks JDBC 2.6.19 documentation page. I can find release notes from the Databricks download page (https://databricks-bi-artifacts.s3.us-east-2.amazonaws.com/simbaspark-drivers/jdbc/2.6.19/docs/release-notes.txt) but on Mag...

Latest Reply
Alexander1
New Contributor III
  • 0 kudos

By the way, what is still odd is that the Simba docs say 2.6.16 only supports up to Spark 2.4, while the release notes on the Databricks download page say 2.6.16 already supports Spark 3.0. Strange that we get contradictory info from the actual driv...

4 More Replies
HowardWong (New Contributor II)
  • 769 Views
  • 1 reply
  • 0 kudos

How many users can the JDBC endpoint support in the All Purpose HC?

What is the maximum number of users the JDBC endpoint can support in an All Purpose high concurrency cluster? To support more SQL workloads, is it better to go with Databricks SQL Endpoints?

Latest Reply
Ryan_Chynoweth
Honored Contributor III
  • 0 kudos

There is an execution context limit of 145. This means you can have at most 145 notebooks attached to a cluster: https://kb.databricks.com/execution/maximum-execution-context.html. If you are primarily using SQL, then Databricks SQL Endpoints wo...

bhaumikg (New Contributor II)
  • 12388 Views
  • 7 replies
  • 2 kudos

Databricks throwing error "SQL DW failed to execute the JDBC query produced by the connector." while pushing a column with string length greater than 255

I am using Databricks to transform the data and then pushing it into a data lake. The data is pushed in if the length of the string field is 255 characters or less, but it throws the following error beyond that: "SQL DW failed to execute the JDB...

Latest Reply
bhaumikg
New Contributor II
  • 2 kudos

As suggested by ZAIvR, use append mode and provide a max length while pushing the data. Overwrite may not work with this unless the Databricks team has fixed the issue.
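A minimal sketch of that fix with the SQL DW connector; the option name maxStrLength and all connection values below are assumptions based on the connector's documentation, not taken from the thread:

// Hedged sketch: append with an explicit max string length so string columns
// map to NVARCHAR(4000) rather than the ~255-char default. Values are placeholders.
df.write
  .format("com.databricks.spark.sqldw")
  .option("url", jdbcUrl)                        // assumed JDBC URL variable
  .option("tempDir", tempDir)                    // staging path required by the connector
  .option("forwardSparkAzureStorageCredentials", "true")
  .option("dbTable", "dbo.my_table")             // hypothetical target table
  .option("maxStrLength", "4000")
  .mode("append")                                // per the reply, overwrite may not honor this
  .save()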

6 More Replies
Tamara (New Contributor III)
  • 8823 Views
  • 8 replies
  • 1 kudos

Resolved! Can I connect to an MS SQL Server table from a Databricks account?

I'd like to access a table on an MS SQL Server (Microsoft). Is it possible from Databricks? To my understanding, the syntax is something like this (in a SQL Notebook): CREATE TEMPORARY TABLE jdbcTable USING org.apache.spark.sql.jdbc OPTIONS ( url...
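For reference, the same idea through the Scala DataFrame API; the host, database, table, and credentials below are placeholders, not details from the thread:

// Hedged sketch: read a SQL Server table over JDBC and expose it to SQL cells.
// All connection values are placeholders.
val jdbcUrl = "jdbc:sqlserver://myhost.database.windows.net:1433;database=mydb"

val jdbcDF = spark.read
  .format("jdbc")
  .option("url", jdbcUrl)
  .option("dbtable", "dbo.my_table")
  .option("user", user)
  .option("password", password)
  .load()

jdbcDF.createOrReplaceTempView("jdbcTable") // queryable from a SQL notebook cell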

Latest Reply
JohnSmith091
New Contributor II
  • 1 kudos

Thanks for the trick that you have shared with us. I am really amazed by this informational post. If you are facing a MacBook error, like a MacBook Pro that won't turn on (black screen), then click the link.

7 More Replies
cfregly (Contributor)
  • 4223 Views
  • 4 replies
  • 0 kudos
Latest Reply
TianziCai
New Contributor II
  • 0 kudos

sample = (spark.read
  .format("com.databricks.spark.redshift")
  .option("url", jdbcUrl)
  .option("dbtable", "xx.xxx")  # schema, table
  .option("forward_spark_s3_credentials", True)
  .option("tempdir", tem...

3 More Replies
longcao (New Contributor III)
  • 9143 Views
  • 5 replies
  • 0 kudos

Resolved! Writing DataFrame to PostgreSQL via JDBC extremely slow (Spark 1.6.1)

Hi there, I'm just getting started with Spark, and I've got a moderately sized DataFrame created from collating CSVs in S3 (88 columns, 860k rows) that seems to take an unreasonable amount of time to insert (using SaveMode.Append) into Postgres. I...
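Before dropping to raw JDBC, the usual writer knobs are worth trying. A hedged sketch with a modern DataFrameWriter; the URL, table name, and values are illustrative, and reWriteBatchedInserts is a PostgreSQL driver URL flag:

// Hedged sketch: tune partition count and JDBC batch size before resorting to COPY.
df.repartition(8)  // number of parallel connections to the database
  .write
  .format("jdbc")
  .option("url", "jdbc:postgresql://host:5432/db?reWriteBatchedInserts=true")
  .option("dbtable", "public.my_table")
  .option("batchsize", "10000")  // rows per JDBC batch; the default is 1000
  .mode("append")
  .save()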

Latest Reply
longcao
New Contributor III
  • 0 kudos

In case anyone was curious how I worked around this, I ended up dropping down to Postgres JDBC and using CopyManager to COPY rows in directly from Spark: https://gist.github.com/longcao/bb61f1798ccbbfa4a0d7b76e49982f84
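Re-sketched in minimal form (the gist has the full version); the CSV rendering and connection wiring here are simplified assumptions:

import java.io.StringReader
import java.sql.DriverManager
import org.postgresql.copy.CopyManager
import org.postgresql.core.BaseConnection

// Hedged sketch: stream each partition into Postgres via COPY instead of
// row-by-row INSERTs. Naive CSV, no quoting/escaping; url, user, and
// password are placeholder Strings defined on the driver.
df.rdd.foreachPartition { rows =>
  val conn = DriverManager.getConnection(url, user, password)
  try {
    val copier = new CopyManager(conn.asInstanceOf[BaseConnection])
    val csv = rows.map(_.mkString(",")).mkString("\n")
    copier.copyIn("COPY public.my_table FROM STDIN WITH (FORMAT csv)", new StringReader(csv))
  } finally {
    conn.close()
  }
}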

4 More Replies