03-03-2023 05:32 AM
I have been using "rdd.flatMap(lambda x: x)" for a while to create lists from columns. However, after I changed the cluster to Shared access mode (to use Unity Catalog), I get the following error:
py4j.security.Py4JSecurityException: Method public org.apache.spark.rdd.RDD org.apache.spark.api.java.JavaRDD.rdd() is not whitelisted on class class org.apache.spark.api.java.JavaRDD
I have tried to solve the error by adding:
"spark.databricks.pyspark.enablePy4JSecurity false"
however I then get the following error:
"spark.databricks.pyspark.enablePy4JSecurity is not allowed when choosing an access mode"
Does anybody know how to use RDDs on a cluster set up for Unity Catalog?
Thank you!
03-08-2023 07:51 PM
@Christine Pedersen : Would you like to start migrating to dataframes? The DataFrame API is a more modern and optimized way to work with structured data in Spark.
The error you are encountering is related to Py4J security settings in Apache Spark. In Shared access mode, Py4J security is enabled by default for security reasons, which restricts certain methods from being called on the Spark RDD object.
03-08-2023 11:16 PM
Hi @Suteja Kanuri,
In this case I am using a PySpark DataFrame, but I am trying to get all values from a column in that DataFrame and create a list. I am using this list to filter columns in another DataFrame (see example below):
value_list = pysparkDF.select(<column_name>).distinct().rdd.flatMap(lambda x: x).collect()
filtered_table = DF2.filter(DF2.<column_name>.isin(value_list))
But I will try to search for ways to avoid lists and keep it in dataframe format.
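If you do need a plain Python list, you can build one without touching `.rdd` by collecting and using a list comprehension. A minimal sketch of the pattern, using `collections.namedtuple` as a stand-in for `pyspark.sql.Row` so it runs without a Spark session (the column name `country` and the values are made up):

```python
from collections import namedtuple

# Stand-in for pyspark.sql.Row; real Row objects also support attribute access.
Row = namedtuple("Row", ["country"])

# In Spark this line would instead be:
#   rows = pysparkDF.select("country").distinct().collect()
rows = [Row("DK"), Row("SE"), Row("NO")]

# Build a plain Python list from the column values -- no .rdd access needed,
# so it should work on a Shared access mode cluster.
value_list = [row.country for row in rows]
print(value_list)
```

On a real DataFrame the equivalent would be `value_list = [row.country for row in pysparkDF.select("country").distinct().collect()]`, which stays entirely on the DataFrame API.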
11-13-2023 01:35 AM
I get the same error while using the repartition command on a shared cluster; it works fine on a single user cluster. Is there an alternative for that? Are there any issues with continuing to use a single user cluster?
03-12-2023 11:59 PM
@Christine Pedersen :
You can achieve this without collecting data into a list using Spark's built-in DataFrame operations.
You can use the join operation to filter DF2 based on the distinct values in the column from pysparkDF. Here's an example:
filtered_table = DF2.join(
    pysparkDF.select(<column_name>).distinct(),
    on=DF2.<column_name> == pysparkDF.<column_name>,
    how='inner'
)
This code performs an inner join on DF2 and pysparkDF using the column name, which effectively filters DF2 based on the distinct values of that column in pysparkDF. Note that this approach returns a new DataFrame rather than a list, which should be more efficient for larger datasets.
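As a side note, Spark's join also supports `how='left_semi'`, which filters the left side without bringing in any columns from the right side, avoiding the duplicate-column issue an inner join can cause. The filtering semantics are just a membership test, sketched here in plain Python with made-up column names and table contents:

```python
# Distinct lookup values (what the right side of the semi join contributes);
# the column name and values are hypothetical.
lookup_values = {"DK", "SE"}

# Stand-ins for DF2 rows.
df2_rows = [
    {"country": "DK", "amount": 10},
    {"country": "NO", "amount": 20},
    {"country": "SE", "amount": 30},
]

# A left-semi join keeps exactly the rows whose key appears on the right side,
# without adding any columns from the lookup table.
filtered = [row for row in df2_rows if row["country"] in lookup_values]
print(filtered)
```

In Spark the equivalent would be `DF2.join(pysparkDF.select("country").distinct(), on="country", how="left_semi")`, again with hypothetical column names.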
06-05-2023 10:14 PM
@Suteja Kanuri
Let me know if I have to do this: I need to run rdd.map on a column containing JSON data, and then read it as a JSON string in PySpark.
How can I do that?
Sample syntax for what I'm trying to achieve on a shared cluster, which fails with the same error related to "spark.databricks.pyspark.enablePy4JSecurity":
Syntax: spark.read.json(df.rdd.map(lambda x: x[0]))
What would be the optimal alternative for this?
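One rdd-free alternative worth trying (a suggestion, not something verified against your data) is `pyspark.sql.functions.from_json` with an explicit schema, which parses JSON strings column by column and should work in Shared access mode. The core operation it performs, parsing each JSON string in a column, looks like this in plain Python (sample values are made up):

```python
import json

# Stand-in for a single-column DataFrame of JSON strings, i.e. what
# df.rdd.map(lambda x: x[0]) would iterate over.
json_column = ['{"id": 1, "name": "a"}', '{"id": 2, "name": "b"}']

# The rdd-free Spark equivalent would be roughly (schema must be supplied):
#   from pyspark.sql.functions import from_json, col
#   parsed_df = df.select(from_json(col("value"), schema).alias("j")).select("j.*")
parsed = [json.loads(s) for s in json_column]
print(parsed)
```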
05-11-2024 12:30 PM
Hi,
Can you use json.loads instead? Example below:
from pyspark.sql import Row
import json
# Parse the JSON payload (e.g. response.text) into a list of dictionaries
json_data_str = response.text
json_data = [json.loads(json_data_str)]
# Convert dictionaries to Row objects
rows = [Row(**json_dict) for json_dict in json_data]
# Create DataFrame from list of Row objects
df = spark.createDataFrame(rows)
# Show the DataFrame
df.display()
08-03-2023 10:36 PM - edited 08-03-2023 10:38 PM
Hi,
I have the exact same issue as @Shivanshu_. Any help would be highly appreciated.
08-07-2023 07:19 AM
Try this:
# Change column_name to the actual column name:
placeholder_list = spark.sql("select column_name from table").collect()
desired_list = [row.column_name for row in placeholder_list]
print(desired_list)
4 weeks ago
Thanks, that solved the issue for me!
08-22-2023 09:37 AM
Try setting the below configuration in a Databricks notebook, then retry. It should work.
01-02-2024 06:36 PM
This configuration does not work for me. Please suggest another solution; I need to use rdd.mapPartitions on a DataFrame created from Unity Catalog data.
02-17-2024 05:05 PM
Hey @283513, were you able to solve this? I am facing the same issue when using VectorAssembler with a Unity Catalog cluster.
02-26-2024 05:09 AM
Faced this issue multiple times.
Solution:
1. Don't use a Shared cluster, or a cluster without Unity Catalog enabled, for running 'rdd' queries on Databricks.
2. Instead, create a Personal Cluster (Single User) with a basic configuration and with Unity Catalog enabled.
3. Also, for the new compute cluster, set the following parameters in Advanced Options:
Re-run your rdd queries on the new compute cluster. It works perfectly well for me.
03-21-2024 04:32 PM
Faced with the same issue, and working for a company, it is not possible for me to create a new cluster. Do you have any other solution for this issue?