06-22-2022 08:50 AM
I am working on writing a large amount of data from Databricks to an external SQL Server using a JDBC connection. I keep getting timeout/connection-lost errors, but digging deeper it appears to be a memory problem. I am wondering what cluster configurations I may need and where would be best to cache my data. The input is about 60 GB of data that is reduced to 60 million rows. The process writes about 1 million rows to the external database and then crashes.
I have tried different cluster configurations: memory-optimized, compute-optimized, etc. I have also tried different garbage collection settings, as the garbage collection metric is dark red during the process.
06-22-2022 09:37 AM
Please increase the number of DataFrame partitions using
repartition(<N>) (or coalesce(<N>) if you only need to reduce them). In most cases this solves the issue, as the write happens in chunks, one per partition.
In addition, these JDBC connection properties can help (see JDBC To Other Databases - Spark 3.3.0 Documentation (apache.org)):
numPartitions
batchsize
isolationLevel
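A minimal sketch of how the repartition-plus-options approach above might look. The connection URL, table name, credentials, and partition/batch numbers are all placeholders you would tune for your own workload; the helper just builds the option dict that Spark's JDBC writer expects.

```python
def jdbc_write_options(url, table, user, password,
                       batchsize=10000, isolation="READ_COMMITTED"):
    """Build the options dict for df.write.format("jdbc").

    batchsize controls how many rows go into each JDBC batch insert;
    isolationLevel controls transaction locking on the target database.
    """
    return {
        "url": url,
        "dbtable": table,
        "user": user,
        "password": password,
        "batchsize": str(batchsize),
        "isolationLevel": isolation,
    }

# Usage on a Spark DataFrame `df` (placeholders, needs a live cluster
# and a reachable SQL Server, so it is not runnable as-is):
#
# (df.repartition(200)   # more, smaller partitions -> smaller chunks per write
#    .write.format("jdbc")
#    .options(**jdbc_write_options(
#        "jdbc:sqlserver://<host>:1433;databaseName=<db>",  # placeholder
#        "dbo.target_table", "<user>", "<password>"))
#    .mode("append")
#    .save())
```

Spreading 60 million rows over, say, 200 partitions means each task commits roughly 300K rows in batches of 10K, instead of a few huge transactions that exhaust executor memory.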
06-22-2022 03:41 PM
Thanks for your response, Hubert! That seemed to work to fix the timeout issue.
06-23-2022 11:46 AM
Great to hear. If it is possible, please select my answer as the best one.
06-23-2022 02:42 AM
Thanks for the answer. I am also getting this problem.