<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: spark throws error while using [NOT_IMPLEMENTED] rdd is not implemented. in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/spark-throws-error-while-using-not-implemented-rdd-is-not/m-p/105154#M42018</link>
    <description>&lt;P&gt;Thank you for the response.&lt;/P&gt;&lt;P&gt;As mentioned, it works fine on the all-purpose compute. Does this mean I should not use RDD APIs on a job cluster?&lt;BR /&gt;Below is my all-purpose compute config:&lt;/P&gt;&lt;P&gt;"autotermination_minutes": 60,&lt;BR /&gt;"enable_elastic_disk": true,&lt;BR /&gt;"init_scripts": [],&lt;BR /&gt;"single_user_name": "user:mh@dmpa.com",&lt;BR /&gt;"enable_local_disk_encryption": false,&lt;BR /&gt;"data_security_mode": "SINGLE_USER",&lt;BR /&gt;"runtime_engine": "PHOTON",&lt;BR /&gt;"effective_spark_version": "15.4.x-photon-scala2.12",&lt;BR /&gt;"assigned_principal": "user:mh@dmpa.com",&lt;BR /&gt;"cluster_id": "19gu786758qhhjajiiusatu"&lt;/P&gt;</description>
    <pubDate>Fri, 10 Jan 2025 11:30:15 GMT</pubDate>
    <dc:creator>mh7</dc:creator>
    <dc:date>2025-01-10T11:30:15Z</dc:date>
    <item>
      <title>spark throws error while using [NOT_IMPLEMENTED] rdd is not implemented.</title>
      <link>https://community.databricks.com/t5/data-engineering/spark-throws-error-while-using-not-implemented-rdd-is-not/m-p/104475#M41764</link>
      <description>&lt;P&gt;I am running the code below on DBR 15.4 LTS and it works fine on an all-purpose cluster.&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;processed_counts = df.rdd.mapPartitions(process_partition).reduce(lambda x, y: x + y)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;When I run the same code on a job cluster, it throws the error below. I verified the cluster settings and they are the same in both cases.&lt;/P&gt;&lt;PRE&gt;[NOT_IMPLEMENTED] rdd is not implemented.
---------------------------------------------------------------------------
PySparkNotImplementedError              Traceback (most recent call last)
line 150
    147 print(f"after repartition {df.count()} rows.")
    149
    150 processed_counts = df.rdd.mapPartitions(process_partition).reduce(lambda x, y: x + y)&lt;/PRE&gt;</description>
      <pubDate>Tue, 07 Jan 2025 10:02:03 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/spark-throws-error-while-using-not-implemented-rdd-is-not/m-p/104475#M41764</guid>
      <dc:creator>mh7</dc:creator>
      <dc:date>2025-01-07T10:02:03Z</dc:date>
    </item>
    <item>
      <title>Re: spark throws error while using [NOT_IMPLEMENTED] rdd is not implemented.</title>
      <link>https://community.databricks.com/t5/data-engineering/spark-throws-error-while-using-not-implemented-rdd-is-not/m-p/104486#M41768</link>
      <description>&lt;P&gt;The error you are encountering, &lt;CODE&gt;[NOT_IMPLEMENTED] rdd is not implemented&lt;/CODE&gt;, occurs because RDD APIs are not supported in certain cluster configurations, specifically shared clusters or job clusters with certain access modes. Please try the same code on a single-user cluster.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Jan 2025 10:59:25 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/spark-throws-error-while-using-not-implemented-rdd-is-not/m-p/104486#M41768</guid>
      <dc:creator>Walter_C</dc:creator>
      <dc:date>2025-01-07T10:59:25Z</dc:date>
    </item>
    <item>
      <title>Re: spark throws error while using [NOT_IMPLEMENTED] rdd is not implemented.</title>
      <link>https://community.databricks.com/t5/data-engineering/spark-throws-error-while-using-not-implemented-rdd-is-not/m-p/105154#M42018</link>
      <description>&lt;P&gt;Thank you for the response.&lt;/P&gt;&lt;P&gt;As mentioned, it works fine on the all-purpose compute. Does this mean I should not use RDD APIs on a job cluster?&lt;BR /&gt;Below is my all-purpose compute config:&lt;/P&gt;&lt;P&gt;"autotermination_minutes": 60,&lt;BR /&gt;"enable_elastic_disk": true,&lt;BR /&gt;"init_scripts": [],&lt;BR /&gt;"single_user_name": "user:mh@dmpa.com",&lt;BR /&gt;"enable_local_disk_encryption": false,&lt;BR /&gt;"data_security_mode": "SINGLE_USER",&lt;BR /&gt;"runtime_engine": "PHOTON",&lt;BR /&gt;"effective_spark_version": "15.4.x-photon-scala2.12",&lt;BR /&gt;"assigned_principal": "user:mh@dmpa.com",&lt;BR /&gt;"cluster_id": "19gu786758qhhjajiiusatu"&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jan 2025 11:30:15 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/spark-throws-error-while-using-not-implemented-rdd-is-not/m-p/105154#M42018</guid>
      <dc:creator>mh7</dc:creator>
      <dc:date>2025-01-10T11:30:15Z</dc:date>
    </item>
    <item>
      <title>Re: spark throws error while using [NOT_IMPLEMENTED] rdd is not implemented.</title>
      <link>https://community.databricks.com/t5/data-engineering/spark-throws-error-while-using-not-implemented-rdd-is-not/m-p/105180#M42029</link>
      <description>&lt;P&gt;OK, but your all-purpose cluster is set up in Single User mode, which does support the RDD API. Can you confirm that your job cluster is also created with Single User access mode?&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jan 2025 14:07:14 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/spark-throws-error-while-using-not-implemented-rdd-is-not/m-p/105180#M42029</guid>
      <dc:creator>Walter_C</dc:creator>
      <dc:date>2025-01-10T14:07:14Z</dc:date>
    </item>
  </channel>
</rss>

