trobin
New Contributor II

Is this fix confirmed for other runtimes? I am having the same issue on 13.3 LTS.

Unfortunately, I don't have this information. I only raised it for 14.3 LTS since my Databricks Connect version matches (14.3.1).

ADuma
New Contributor III

The error is now occurring for me on clusters with version 15.4. Has the fix been released yet?

I don't think it has been released yet, and now I’m facing this issue on both 14.3 LTS and 15.4 LTS :(.

FYI @Retired_mod 

asia_sowa
New Contributor II

I have the same issue with the 13.3 LTS version.

LukeEs
New Contributor II

I was informed that the fix was released. According to our tests, however, it did not fix anything. Instead, 15.4 LTS is now broken, too. This topic is getting urgent for us now.

felix_
New Contributor II

Any updates on this? Facing the same issue with 15.4 LTS now as well.

ahsan_aj
Contributor II

Microsoft support just mentioned that the fix has been deployed by Databricks, but the issue persists for me on both 14.3 LTS and 15.4 LTS.

CarlDaniel
New Contributor II

Now I have the same issue with 15.4 LTS. Does the fix work on 14.3 LTS? Thanks!

MichalMazurek
New Contributor III

I still have the issue, but I noticed I do not have it on Linux. Support told me to use this line:

spark.conf.set("spark.sql.session.localRelationCacheThreshold", 64 * 1024 * 1024)

With that, it worked on Windows as well; it also gives a hint about what the batch size should be.

ahsan_aj
Contributor II

I have a troubleshooting session scheduled with Databricks today regarding this issue and will keep everyone updated on the progress.

ahsan_aj
Contributor II

As a workaround, please try the following Spark configuration, which seems to have resolved the issue for me on both 14.3 LTS and 15.4 LTS.

spark.conf.set("spark.sql.session.localRelationCacheThreshold", 64 * 1024 * 1024)
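For context, this setting raises the local relation cache threshold to 64 MiB (67,108,864 bytes). A minimal sketch of applying it, assuming you already have a Databricks Connect session available as `spark` (the session-creation line is illustrative, not from this thread):

```python
# 64 MiB: the local relation cache threshold suggested as a workaround above.
THRESHOLD_BYTES = 64 * 1024 * 1024  # 67108864 bytes

# Assumed session setup (Databricks Connect); adapt to your environment:
# from databricks.connect import DatabricksSession
# spark = DatabricksSession.builder.getOrCreate()

# Apply the workaround before creating DataFrames from local data:
# spark.conf.set("spark.sql.session.localRelationCacheThreshold", THRESHOLD_BYTES)

print(THRESHOLD_BYTES)
```

Note the threshold is in bytes; local relations larger than this value are handled differently by the Spark Connect client, which is why the value also hints at a safe batch size.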


Databricks confirmed the same workaround while they work on a permanent fix.