I still have the issue, but I noticed that it does not occur on Linux. Support told me to use this line:

spark.conf.set("spark.sql.session.localRelationCacheThreshold", 64 * 1024 * 1024)

With that, it worked on Windows. This also gives a hint about what should be ...
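For anyone else hitting this, here is a minimal sketch of where that workaround fits, assuming a standard PySpark session (the session builder, app name, and the sample DataFrame are placeholders I added, not from the thread). The idea is to set the config before creating DataFrames from local data, since the threshold governs how large local relations are handled:

```python
# Minimal sketch of applying the workaround from support, assuming PySpark.
from pyspark.sql import SparkSession

# Placeholder session setup; in practice use your existing session/cluster config.
spark = SparkSession.builder.appName("local-relation-workaround").getOrCreate()

# Raise the local relation cache threshold to 64 MiB, as support suggested.
# Apply it before building DataFrames from local data.
spark.conf.set("spark.sql.session.localRelationCacheThreshold", 64 * 1024 * 1024)

# Hypothetical usage: create a DataFrame from local rows after the setting is applied.
df = spark.createDataFrame([(i, f"row-{i}") for i in range(1000)], ["id", "label"])
df.show(5)
```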
Hi @Retired_mod, this is a really bad support experience. Is this how Databricks support manages issues? I am currently thinking about switching to a different solution; this has been an outage for several days now.
I got information from our Databricks manager that this is a known issue they are working on, although it is taking a long time. For us, this is a huge problem for going to production!