<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Exception in thread &quot;main&quot; org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory; in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/exception-in-thread-quot-main-quot-org-apache-spark-sql/m-p/34622#M25359</link>
    <description>&lt;P&gt;I am trying to read a 16 MB Excel file, and I was getting a "GC overhead limit exceeded" error. To resolve that, I tried to increase my executor memory with:&lt;/P&gt;&lt;P&gt;spark.conf.set("spark.executor.memory", "8g")&lt;/P&gt;&lt;P&gt;but I got the following stack trace:&lt;/P&gt;&lt;P&gt;Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties&lt;/P&gt;&lt;P&gt;Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;&lt;/P&gt;&lt;P&gt;	at org.apache.spark.sql.RuntimeConfig.requireNonStaticConf(RuntimeConfig.scala:158)&lt;/P&gt;&lt;P&gt;	at org.apache.spark.sql.RuntimeConfig.set(RuntimeConfig.scala:42)&lt;/P&gt;&lt;P&gt;	at com.sundogsoftware.spark.spaceTrim.trimmer$.delayedEndpoint$com$sundogsoftware$spark$spaceTrim$trimmer$1(trimmer.scala:29)&lt;/P&gt;&lt;P&gt;	at com.sundogsoftware.spark.spaceTrim.trimmer$delayedInit$body.apply(trimmer.scala:9)&lt;/P&gt;&lt;P&gt;	at scala.Function0.apply$mcV$sp(Function0.scala:39)&lt;/P&gt;&lt;P&gt;	at scala.Function0.apply$mcV$sp$(Function0.scala:39)&lt;/P&gt;&lt;P&gt;	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)&lt;/P&gt;&lt;P&gt;	at scala.App.$anonfun$main$1$adapted(App.scala:80)&lt;/P&gt;&lt;P&gt;	at scala.collection.immutable.List.foreach(List.scala:431)&lt;/P&gt;&lt;P&gt;	at scala.App.main(App.scala:80)&lt;/P&gt;&lt;P&gt;	at scala.App.main$(App.scala:78)&lt;/P&gt;&lt;P&gt;	at com.sundogsoftware.spark.spaceTrim.trimmer$.main(trimmer.scala:9)&lt;/P&gt;&lt;P&gt;	at com.sundogsoftware.spark.spaceTrim.trimmer.main(trimmer.scala)&lt;/P&gt;&lt;P&gt;My code:&lt;/P&gt;&lt;P&gt;val spark = SparkSession&lt;/P&gt;&lt;P&gt;    .builder&lt;/P&gt;&lt;P&gt;    .appName("schemaTest")&lt;/P&gt;&lt;P&gt;    .master("local[*]")&lt;/P&gt;&lt;P&gt;    .getOrCreate()&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;spark.conf.set("spark.executor.memory", "8g")&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;val df = spark.read&lt;/P&gt;&lt;P&gt;    .format("com.crealytics.spark.excel")&lt;/P&gt;&lt;P&gt;    .option("header", "true")&lt;/P&gt;&lt;P&gt;    .option("inferSchema", "false")&lt;/P&gt;&lt;P&gt;    .option("treatEmptyValuesAsNulls", "false")&lt;/P&gt;&lt;P&gt;    .option("addColorColumns", "false")&lt;/P&gt;&lt;P&gt;    .load("data/12file.xlsx")&lt;/P&gt;</description>
    <pubDate>Mon, 22 Nov 2021 08:58:47 GMT</pubDate>
    <dc:creator>sarvesh</dc:creator>
    <dc:date>2021-11-22T08:58:47Z</dc:date>
    <item>
      <title>Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;</title>
      <link>https://community.databricks.com/t5/data-engineering/exception-in-thread-quot-main-quot-org-apache-spark-sql/m-p/34622#M25359</link>
      <description>&lt;P&gt;I am trying to read a 16 MB Excel file, and I was getting a "GC overhead limit exceeded" error. To resolve that, I tried to increase my executor memory with:&lt;/P&gt;&lt;P&gt;spark.conf.set("spark.executor.memory", "8g")&lt;/P&gt;&lt;P&gt;but I got the following stack trace:&lt;/P&gt;&lt;P&gt;Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties&lt;/P&gt;&lt;P&gt;Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;&lt;/P&gt;&lt;P&gt;	at org.apache.spark.sql.RuntimeConfig.requireNonStaticConf(RuntimeConfig.scala:158)&lt;/P&gt;&lt;P&gt;	at org.apache.spark.sql.RuntimeConfig.set(RuntimeConfig.scala:42)&lt;/P&gt;&lt;P&gt;	at com.sundogsoftware.spark.spaceTrim.trimmer$.delayedEndpoint$com$sundogsoftware$spark$spaceTrim$trimmer$1(trimmer.scala:29)&lt;/P&gt;&lt;P&gt;	at com.sundogsoftware.spark.spaceTrim.trimmer$delayedInit$body.apply(trimmer.scala:9)&lt;/P&gt;&lt;P&gt;	at scala.Function0.apply$mcV$sp(Function0.scala:39)&lt;/P&gt;&lt;P&gt;	at scala.Function0.apply$mcV$sp$(Function0.scala:39)&lt;/P&gt;&lt;P&gt;	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)&lt;/P&gt;&lt;P&gt;	at scala.App.$anonfun$main$1$adapted(App.scala:80)&lt;/P&gt;&lt;P&gt;	at scala.collection.immutable.List.foreach(List.scala:431)&lt;/P&gt;&lt;P&gt;	at scala.App.main(App.scala:80)&lt;/P&gt;&lt;P&gt;	at scala.App.main$(App.scala:78)&lt;/P&gt;&lt;P&gt;	at com.sundogsoftware.spark.spaceTrim.trimmer$.main(trimmer.scala:9)&lt;/P&gt;&lt;P&gt;	at com.sundogsoftware.spark.spaceTrim.trimmer.main(trimmer.scala)&lt;/P&gt;&lt;P&gt;My code:&lt;/P&gt;&lt;P&gt;val spark = SparkSession&lt;/P&gt;&lt;P&gt;    .builder&lt;/P&gt;&lt;P&gt;    .appName("schemaTest")&lt;/P&gt;&lt;P&gt;    .master("local[*]")&lt;/P&gt;&lt;P&gt;    .getOrCreate()&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;spark.conf.set("spark.executor.memory", "8g")&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;val df = spark.read&lt;/P&gt;&lt;P&gt;    .format("com.crealytics.spark.excel")&lt;/P&gt;&lt;P&gt;    .option("header", "true")&lt;/P&gt;&lt;P&gt;    .option("inferSchema", "false")&lt;/P&gt;&lt;P&gt;    .option("treatEmptyValuesAsNulls", "false")&lt;/P&gt;&lt;P&gt;    .option("addColorColumns", "false")&lt;/P&gt;&lt;P&gt;    .load("data/12file.xlsx")&lt;/P&gt;</description>
      <pubDate>Mon, 22 Nov 2021 08:58:47 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/exception-in-thread-quot-main-quot-org-apache-spark-sql/m-p/34622#M25359</guid>
      <dc:creator>sarvesh</dc:creator>
      <dc:date>2021-11-22T08:58:47Z</dc:date>
    </item>
    <item>
      <title>Re: Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;</title>
      <link>https://community.databricks.com/t5/data-engineering/exception-in-thread-quot-main-quot-org-apache-spark-sql/m-p/34623#M25360</link>
      <description>&lt;P&gt;Hi @sarvesh singh, please try setting the value in the cluster's Spark config tab. It should help.&lt;/P&gt;</description>
      <pubDate>Mon, 22 Nov 2021 09:16:15 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/exception-in-thread-quot-main-quot-org-apache-spark-sql/m-p/34623#M25360</guid>
      <dc:creator>Prabakar</dc:creator>
      <dc:date>2021-11-22T09:16:15Z</dc:date>
    </item>
    <item>
      <title>Re: Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;</title>
      <link>https://community.databricks.com/t5/data-engineering/exception-in-thread-quot-main-quot-org-apache-spark-sql/m-p/34624#M25361</link>
      <description>&lt;P&gt;On the cluster configuration page, expand the advanced options. There you will find the Spark tab, where you can set the values in the "Spark config" field.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="image"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/2305iA4BCB2A0E5E3CD7E/image-size/large?v=v2&amp;amp;px=999" role="button" title="image" alt="image" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 22 Nov 2021 09:24:25 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/exception-in-thread-quot-main-quot-org-apache-spark-sql/m-p/34624#M25361</guid>
      <dc:creator>Prabakar</dc:creator>
      <dc:date>2021-11-22T09:24:25Z</dc:date>
    </item>
    <item>
      <title>Re: Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;</title>
      <link>https://community.databricks.com/t5/data-engineering/exception-in-thread-quot-main-quot-org-apache-spark-sql/m-p/34625#M25362</link>
      <description>&lt;P&gt;Thank you for replying, but for this project I am using IntelliJ and running locally. Is there some way to do the same with the Spark session or context?&lt;/P&gt;</description>
      <pubDate>Mon, 22 Nov 2021 09:50:38 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/exception-in-thread-quot-main-quot-org-apache-spark-sql/m-p/34625#M25362</guid>
      <dc:creator>sarvesh</dc:creator>
      <dc:date>2021-11-22T09:50:38Z</dc:date>
    </item>
  </channel>
</rss>