
Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;

sarvesh
Contributor III

I am trying to read a 16 MB Excel file, and I was getting a "GC overhead limit exceeded" error. To resolve it, I tried to increase my executor memory with:

spark.conf.set("spark.executor.memory", "8g")

but I got the following stack trace:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;
	at org.apache.spark.sql.RuntimeConfig.requireNonStaticConf(RuntimeConfig.scala:158)
	at org.apache.spark.sql.RuntimeConfig.set(RuntimeConfig.scala:42)
	at com.sundogsoftware.spark.spaceTrim.trimmer$.delayedEndpoint$com$sundogsoftware$spark$spaceTrim$trimmer$1(trimmer.scala:29)
	at com.sundogsoftware.spark.spaceTrim.trimmer$delayedInit$body.apply(trimmer.scala:9)
	at scala.Function0.apply$mcV$sp(Function0.scala:39)
	at scala.Function0.apply$mcV$sp$(Function0.scala:39)
	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
	at scala.App.$anonfun$main$1$adapted(App.scala:80)
	at scala.collection.immutable.List.foreach(List.scala:431)
	at scala.App.main(App.scala:80)
	at scala.App.main$(App.scala:78)
	at com.sundogsoftware.spark.spaceTrim.trimmer$.main(trimmer.scala:9)
	at com.sundogsoftware.spark.spaceTrim.trimmer.main(trimmer.scala)

My code:

val spark = SparkSession
  .builder
  .appName("schemaTest")
  .master("local[*]")
  .getOrCreate()

spark.conf.set("spark.executor.memory", "8g")

val df = spark.read
  .format("com.crealytics.spark.excel")
  .option("header", "true")
  .option("inferSchema", "false")
  .option("treatEmptyValuesAsNulls", "false")
  .option("addColorColumns", "False")
  .load("data/12file.xlsx")

1 ACCEPTED SOLUTION

Prabakar
Databricks Employee

On the cluster configuration page, expand the advanced options. There you will find the Spark tab, where you can set the values in the "Spark config" field.
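
For example (an illustrative value, not part of the original reply), the Spark config field takes one space-separated key-value pair per line:

spark.executor.memory 8g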



REPLIES

Prabakar
Databricks Employee

Hi @sarvesh singh, please try setting the value in the cluster Spark config tab. It should help.


sarvesh
Contributor III

Thank you for replying, but for this project I am using IntelliJ and working locally. Is there some way to do the same with the Spark session or context?
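
A point worth noting for this local setup: local[*] launches no separate executors, so spark.executor.memory is effectively ignored and the heap ceiling comes from the driver JVM. The surer route in IntelliJ is the VM option -Xmx8g (or --driver-memory 8g with spark-submit), since the driver JVM is already running by the time application code executes. A minimal sketch under that assumption (app name mirrors the post):

import org.apache.spark.sql.SparkSession

// Sketch: in local mode the driver does all the work, so size the driver.
val spark = SparkSession
  .builder
  .appName("schemaTest")
  .master("local[*]")
  .config("spark.driver.memory", "8g") // no effect on a JVM that is already up (the usual IDE case); prefer -Xmx / --driver-memory at launch
  .getOrCreate()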
