08-23-2021 07:48 AM
I set spark.conf.set("spark.driver.maxResultSize", "20g") in the notebook, and spark.conf.get("spark.driver.maxResultSize") returns 20g as expected. I did not configure anything at the cluster level.
Yet while the Spark job executes, the effective limit is still 4g, and the job fails because of it. Why?
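For reference, a minimal sketch of how the mismatch can be observed from a notebook (assuming a Databricks cluster whose driver was started with a 4g limit; only the config key comes from the post, the rest is illustrative):

```scala
// Session-level setting made from the notebook
spark.conf.set("spark.driver.maxResultSize", "20g")

// The session config reports the value just set
println(spark.conf.get("spark.driver.maxResultSize"))            // 20g

// The SparkConf captured when the driver JVM started is what the job actually uses
println(spark.sparkContext.getConf.get("spark.driver.maxResultSize", "<not set>"))  // e.g. 4g
```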
- Labels: Spark config
Accepted Solutions
09-16-2021 10:24 AM
Hi @sachinmkp1@gmail.com,
You need to add this Spark configuration at the cluster level, not in the notebook. spark.driver.maxResultSize is read when the driver's Spark context is created, so setting it from a notebook after the cluster is already running has no effect; a cluster-level entry is applied before the context starts, so the job picks it up properly. For more details on this issue, please check our knowledge base article: https://kb.databricks.com/jobs/job-fails-maxresultsize-exception.html
Thank you.
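If it helps, a cluster-level entry would look roughly like this (cluster configuration > Advanced Options > Spark > Spark config; the 20g value is simply the one from your post):

```
spark.driver.maxResultSize 20g
```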
08-23-2021 07:51 AM
My question is: when I set spark.driver.maxResultSize = 20g in the notebook only, the value is not picked up while the job executes, even though spark.conf.get returns 20g in the notebook.
I would still like to understand why it behaves like this.

