<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic SparkR session failed to initialize in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/sparkr-session-failed-to-initialize/m-p/17781#M11734</link>
    <description>&lt;P&gt;When I run sparkR.session()&lt;/P&gt;&lt;P&gt;I get the error below:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;Spark package found in SPARK_HOME: /databricks/spark
Launching java with spark-submit command /databricks/spark/bin/spark-submit sparkr-shell /tmp/Rtmp5hnW8G/backend_porte9141208532d
Error: Could not find or load main class org.apache.spark.launcher.Main
/databricks/spark/bin/spark-class: line 101: CMD: bad array subscript
Error in sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, :
JVM is not ready after 10 seconds&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;When I checked the cluster log4j logs, I found that I had hit the RBackend limit:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;21/06/29 18:26:17 INFO RDriverLocal: 394. RDriverLocal.e9dee079-46f8-4108-b1ed-25fa02742efb: Exceeded maximum number of RBackends limit: 200&lt;/CODE&gt;&lt;/PRE&gt;</description>
    <pubDate>Fri, 02 Jul 2021 16:11:29 GMT</pubDate>
    <dc:creator>User16752239289</dc:creator>
    <dc:date>2021-07-02T16:11:29Z</dc:date>
    <item>
      <title>SparkR session failed to initialize</title>
      <link>https://community.databricks.com/t5/data-engineering/sparkr-session-failed-to-initialize/m-p/17781#M11734</link>
      <description>&lt;P&gt;When I run sparkR.session()&lt;/P&gt;&lt;P&gt;I get the error below:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;Spark package found in SPARK_HOME: /databricks/spark
Launching java with spark-submit command /databricks/spark/bin/spark-submit sparkr-shell /tmp/Rtmp5hnW8G/backend_porte9141208532d
Error: Could not find or load main class org.apache.spark.launcher.Main
/databricks/spark/bin/spark-class: line 101: CMD: bad array subscript
Error in sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, :
JVM is not ready after 10 seconds&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;When I checked the cluster log4j logs, I found that I had hit the RBackend limit:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;21/06/29 18:26:17 INFO RDriverLocal: 394. RDriverLocal.e9dee079-46f8-4108-b1ed-25fa02742efb: Exceeded maximum number of RBackends limit: 200&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Fri, 02 Jul 2021 16:11:29 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sparkr-session-failed-to-initialize/m-p/17781#M11734</guid>
      <dc:creator>User16752239289</dc:creator>
      <dc:date>2021-07-02T16:11:29Z</dc:date>
    </item>
    <item>
      <title>Re: SparkR session failed to initialize</title>
      <link>https://community.databricks.com/t5/data-engineering/sparkr-session-failed-to-initialize/m-p/17782#M11735</link>
      <description>&lt;P&gt;This happens when users run their R scripts on RStudio and the R session is not shut down gracefully.&lt;/P&gt;&lt;P&gt;Databricks is working on handling R sessions better and on removing the limit.&lt;/P&gt;&lt;P&gt;As a workaround, you can create and run the init script below to increase the limit:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;%scala
// Write an init script that raises the per-driver RBackend limit
val initScriptContent = s"""
 |#!/bin/bash
 |cat &amp;gt; /databricks/common/conf/rbackend_limit.conf &amp;lt;&amp;lt; EOL
 |{
 | databricks.daemon.driver.maxNumRBackendsPerDriver = &amp;lt;value&amp;gt;
 |}
 |EOL
""".stripMargin
dbutils.fs.put("dbfs:/&amp;lt;path&amp;gt;/set_rbackend.sh", initScriptContent, true)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;Replace &amp;lt;value&amp;gt; with the new limit and &amp;lt;path&amp;gt; with the DBFS location where you want the script, then attach it to the cluster as an init script and restart the cluster.&lt;/P&gt;</description>
      <pubDate>Fri, 02 Jul 2021 16:18:23 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sparkr-session-failed-to-initialize/m-p/17782#M11735</guid>
      <dc:creator>User16752239289</dc:creator>
      <dc:date>2021-07-02T16:18:23Z</dc:date>
    </item>
  </channel>
</rss>