spark.conf.set("spark.driver.maxResultSize", "20g")
spark.conf.get("spark.driver.maxResultSize") // returns "20g" as expected. I set this in the notebook, not in the cluster-level Spark config.
But while the Spark job is executing, the effective limit is still the 4g default, and the job fails because of it. Why doesn't the notebook-level setting take effect?
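For context, `spark.driver.maxResultSize` is a core driver setting that is read when the SparkContext is created, so it has to be supplied before the driver starts (cluster Spark config, `spark-submit --conf`, or the session builder). A minimal sketch for a standalone application follows; the app name is illustrative, and on Databricks the equivalent is the cluster's Spark config rather than building a session yourself:

```scala
import org.apache.spark.sql.SparkSession

// Supply spark.driver.maxResultSize at session creation time.
// Calling spark.conf.set(...) later only updates the in-memory runtime
// conf map, which is why conf.get can echo "20g" while the already-running
// driver still enforces the 4g default.
val spark = SparkSession.builder()
  .appName("max-result-size-example") // illustrative name
  .config("spark.driver.maxResultSize", "20g")
  .getOrCreate()
```

On a Databricks cluster, the same key/value pair goes under the cluster's Spark config (Advanced options), which restarts the driver with the new limit.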