I am working on writing a large amount of data from Databricks to an external SQL Server using a JDBC connection. I keep getting timeout/connection-lost errors, but digging deeper it appears to be a memory problem. I am wondering what cluster configuration I may need and where it would be best to cache my data. The input is about 60 GB of data that is reduced to roughly 60 million rows. The process successfully writes about 1 million rows to the external database and then crashes.
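For reference, the write path is roughly shaped like this (paths, table names, column names, and secret scopes below are placeholders, not my actual ones; `spark` and `dbutils` are the objects Databricks provides in the notebook):

```python
from pyspark import StorageLevel

# Placeholder read of the ~60 GB input
df = spark.read.format("delta").load("/mnt/raw/events")

# Placeholder reduction down to ~60 million rows
reduced = df.select("id", "event_ts", "payload").dropDuplicates(["id"])

# Persist to disk so the reduction is not recomputed during the JDBC write
# and executors are not holding the whole dataset in memory
reduced.persist(StorageLevel.DISK_ONLY)
reduced.count()  # materialize the persisted data before writing

(reduced
    .repartition(64)  # also caps the number of concurrent JDBC connections
    .write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://myserver.example.com:1433;database=mydb")
    .option("dbtable", "dbo.target_table")
    .option("user", dbutils.secrets.get("my-scope", "sql-user"))
    .option("password", dbutils.secrets.get("my-scope", "sql-password"))
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .option("batchsize", 10000)  # rows per JDBC batch insert
    .mode("append")
    .save())
```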
I have tried different cluster configurations (memory optimized, compute optimized, etc.). I have also tried different garbage collection settings, since the garbage collection metric in the cluster metrics is dark red throughout the process.
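The GC tuning I experimented with was along these lines, set in the cluster's Spark config (the values here are illustrative, not my exact settings):

```
spark.executor.extraJavaOptions -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35
spark.driver.extraJavaOptions   -XX:+UseG1GC
spark.memory.fraction           0.6
```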