I have been getting this error sporadically. I'm loading a dataset and training a model on it in a notebook. Sometimes it works and sometimes it doesn't. I have seen similar posts and tried all the solutions mentioned there: raising the log output size limit, tuning the spark.network.timeout configuration, and creating a temporary view. None of them fundamentally solved the issue: sometimes the job runs without any problems, and sometimes it fails with the error above. I'm fairly sure it isn't a memory issue, since I have allocated plenty of cluster memory. Could you please shed some light on what is causing this? In particular, I don't understand why it breaks only some of the time and not always, which makes it very hard to pinpoint. Thank you!
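For context, here is a minimal sketch of what I tried based on those posts. The dataset path, view name, and Parquet format are placeholders on my part (my actual loading code isn't shown here); the timeout value is just an example of raising it above the 120s default.

```python
from pyspark.sql import SparkSession

# Raise the network timeout well above the 120s default,
# as suggested in similar posts (value here is illustrative).
spark = (
    SparkSession.builder
    .appName("training-notebook")
    .config("spark.network.timeout", "600s")
    .getOrCreate()
)

# Load the dataset (path and format are placeholders).
df = spark.read.parquet("/mnt/data/training_set")

# Register a temporary view, another suggested workaround.
df.createOrReplaceTempView("training_view")

# Pull the data through the view before training the model.
train_df = spark.sql("SELECT * FROM training_view")
train_pdf = train_df.toPandas()  # training runs on this locally
```

Even with this setup, the same cell sometimes succeeds and sometimes fails with the error above.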