Spark throws error [NOT_IMPLEMENTED] rdd is not implemented when using the RDD API
01-07-2025 02:02 AM
I am running code on Databricks Runtime 15.4 LTS and it works fine on an all-purpose cluster.
processed_counts = df.rdd.mapPartitions(process_partition).reduce(lambda x, y: x + y)
When I run the same code on a job cluster, it throws the error below. I verified the cluster settings and they look fine in both cases.

[NOT_IMPLEMENTED] rdd is not implemented
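(For context, process_partition is not shown in the post; a hypothetical shape consistent with the reduce above would be a generator that emits one count per partition:)

def process_partition(rows):
    # Hypothetical: count the rows in this partition and yield the count,
    # so reduce(lambda x, y: x + y) sums the counts across partitions.
    yield sum(1 for _ in rows)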
01-07-2025 02:59 AM
The error you are encountering, [NOT_IMPLEMENTED] rdd is not implemented, occurs because RDD APIs are not supported in certain cluster configurations, specifically shared clusters or job clusters with certain access modes. Please try the same code on a single-user cluster.
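If you need the code to run on a cluster where the RDD API is unavailable, one option is to stay within the DataFrame API. Below is a minimal sketch using mapInPandas, assuming process_partition simply counts rows per partition (the original function is not shown, so adapt as needed):

from typing import Iterator
import pandas as pd

def count_rows(batches: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]:
    # Emit one single-row DataFrame per incoming batch with its row count.
    for batch in batches:
        yield pd.DataFrame({"n": [len(batch)]})

# Sum the per-batch counts with a DataFrame aggregation instead of rdd.reduce.
processed_counts = df.mapInPandas(count_rows, schema="n long").agg({"n": "sum"}).collect()[0][0]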
01-10-2025 03:30 AM
Thank you for the response.
As mentioned, it works fine on all-purpose compute. Does this mean I should not use RDD APIs on a job cluster?
Below is my all-purpose compute config:
"autotermination_minutes": 60,
"enable_elastic_disk": true,
"init_scripts": [],
"single_user_name": "user:mh@dmpa.com",
"enable_local_disk_encryption": false,
"data_security_mode": "SINGLE_USER",
"runtime_engine": "PHOTON",
"effective_spark_version": "15.4.x-photon-scala2.12",
"assigned_principal": "user:mh@dmpa.com",
"cluster_id": "19gu786758qhhjajiiusatu"
01-10-2025 06:07 AM
OK, but your all-purpose cluster is set up with Single User access mode, which does support the RDD API. Can you confirm that your job cluster is also created with Single User access mode?
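A job cluster does not inherit the access mode from your all-purpose cluster; the Jobs new_cluster spec needs the same fields set explicitly. A minimal sketch of the relevant keys, mirroring your config above (other required keys such as node type and worker count omitted):

"data_security_mode": "SINGLE_USER",
"single_user_name": "user:mh@dmpa.com",
"spark_version": "15.4.x-photon-scala2.12",
"runtime_engine": "PHOTON"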

