08-23-2021 07:48 AM
set: spark.conf.set("spark.driver.maxResultSize", "20g")
get: spark.conf.get("spark.driver.maxResultSize") // returns 20g in the notebook, as expected; I did not set this at the cluster level
Why does the job still run with the 4g limit when it executes? The job is failing because of this.
09-16-2021 10:24 AM
Hi @sachinmkp1@gmail.com ,
You need to set this Spark configuration at the cluster level, not at the notebook level. When you add it at the cluster level, the setting is applied properly. For more details on this issue, please see our knowledge base article: https://kb.databricks.com/jobs/job-fails-maxresultsize-exception.html
Thank you.
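For reference, a sketch of what the cluster-level setting looks like: in Databricks, cluster Spark options are entered as one "key value" pair per line in the cluster's Spark config box (under the cluster's advanced options). The key and value below are taken from the question; the exact UI path may differ between Databricks versions.

```
spark.driver.maxResultSize 20g
```

After editing this, the cluster must be restarted for the driver to pick up the new value.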
08-23-2021 07:51 AM
The question is: when I set spark.driver.maxResultSize = 20g in the notebook only, it is not picked up when the job executes, even though spark.conf.get returns 20g in the notebook.
I would still like clarification on why it behaves like this.
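To illustrate why spark.conf.get can return 20g in the notebook while the running job still enforces 4g (this is my understanding of the behavior, not an official statement): spark.conf.set writes to the SparkSession's runtime configuration, which is also what spark.conf.get reads back, but spark.driver.maxResultSize is read by the driver when the cluster starts, so setting it from a notebook on an already-running cluster never reaches the driver. The toy model below is purely illustrative; the classes are not Spark APIs.

```python
# Toy model (NOT real Spark internals) of the two config layers involved.

class Driver:
    """Stands in for the driver JVM: reads maxResultSize once, at cluster startup."""
    def __init__(self, cluster_conf):
        # Captured at startup; later session-level sets never reach this value.
        self.max_result_size = cluster_conf.get("spark.driver.maxResultSize", "4g")

class Session:
    """Stands in for the notebook's spark.conf: a runtime overlay."""
    def __init__(self, driver):
        self.driver = driver
        self.runtime_conf = {}
    def set(self, key, value):
        self.runtime_conf[key] = value      # only updates the overlay
    def get(self, key):
        return self.runtime_conf.get(key)   # reads the overlay back

# Cluster started WITHOUT the override -> driver captured the 4g default.
driver = Driver(cluster_conf={})
session = Session(driver)

session.set("spark.driver.maxResultSize", "20g")
print(session.get("spark.driver.maxResultSize"))  # "20g" -- what the notebook sees
print(driver.max_result_size)                     # "4g"  -- what the job enforces
```

This is why the fix is to put the setting in the cluster-level Spark config and restart the cluster: the driver then captures 20g at startup.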
04-04-2022 01:15 AM
Hi @sachinmkp1@gmail.com , does @Jose Gonzalez's reply answer your question?