06-22-2022 08:50 AM
I am working on writing a large amount of data from Databricks to an external SQL Server using a JDBC connection. I keep getting timeout errors/connection lost, but digging deeper it appears to be a memory problem. I am wondering what cluster configurations I may need and where would be best to cache my data. The input is about 60 GB of data that is reduced to 60 million rows. The process writes about 1 million rows to the external database and then crashes.
I have tried different cluster configurations (memory optimized, compute optimized, etc.). I have also tried different garbage collection settings, as the garbage collection metric is dark red during the process.
06-22-2022 09:37 AM
Please increase the number of DataFrame partitions using
coalesce(<N>) or repartition(<N>). In most cases this should resolve the issue, as the write will happen in chunks, one per partition.
In addition, these JDBC connection properties can help (as described in JDBC To Other Databases - Spark 3.3.0 Documentation (apache.org)); see the sketch after the list:
numPartitions
batchsize
isolationLevel
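A minimal sketch of how that could look in PySpark, assuming a SQL Server target; the server, database, table, credentials, and the specific partition/batch numbers below are placeholders you would tune for your own workload:

```python
# Hypothetical example: repartition before the JDBC write and tune the write options.
# <your-server>, <your-db>, dbo.target_table, <username>, <password> are placeholders.
jdbc_url = "jdbc:sqlserver://<your-server>:1433;databaseName=<your-db>"

(df
    .repartition(64)                              # spread the ~60M rows across more, smaller partitions
    .write
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.target_table")
    .option("user", "<username>")
    .option("password", "<password>")
    .option("numPartitions", 64)                  # cap on parallel JDBC connections used for the write
    .option("batchsize", 10000)                   # rows per INSERT batch sent to the database
    .option("isolationLevel", "READ_COMMITTED")   # transaction isolation level for the writer
    .mode("append")
    .save())
```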
06-22-2022 03:41 PM
Thanks for your response, Hubert! That seems to have fixed the timeout issue.
06-23-2022 11:46 AM
Great to hear. If it is possible, please select my answer as the best one.
08-14-2024 02:14 PM
Excuse me Megan05, what parameters did you use?
06-23-2022 02:42 AM
Thanks for the answer. I am also running into this problem.