PicklingError: Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers.
08-04-2020 11:54 PM
I am trying to write a function in Azure Databricks, and I would like to use spark.sql inside it. But it looks like I cannot use it on the worker nodes.

def SEL_ID(value, index):
    # some processing on value here
    ans = spark.sql("SELECT id FROM table WHERE bin = index")
    return ans

spark.udf.register("SEL_ID", SEL_ID)
I am getting the following error:
PicklingError: Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
Is there any way I can overcome this? I am using the above function to select from another table.
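The usual way around this is to avoid calling spark.sql (or anything else that touches the SparkSession/SparkContext) inside the UDF, since UDFs run on the executors while the session only exists on the driver. Below is a minimal sketch of one common workaround, assuming the lookup table is small enough to collect to the driver: pull the bin-to-id mapping into a broadcast dictionary on the driver, then have the UDF read from that dictionary. The names table, id, and bin come from the snippet above; sel_id, lookup_bc, and the IntegerType return type are illustrative assumptions, not part of the original question.

from pyspark.sql import SparkSession
from pyspark.sql.types import IntegerType

spark = SparkSession.builder.getOrCreate()

# Run the query once on the driver and collect the mapping.
# Assumes the lookup table fits comfortably in driver/executor memory.
lookup_rows = spark.sql("SELECT bin, id FROM table").collect()
lookup = {row["bin"]: row["id"] for row in lookup_rows}

# Broadcast the plain Python dict so workers can read it without SparkContext.
lookup_bc = spark.sparkContext.broadcast(lookup)

def sel_id(value, index):
    # some processing on value here (as in the original function)
    # Look up the id in the broadcast dict instead of calling spark.sql on a worker.
    return lookup_bc.value.get(index)

# IntegerType is an assumption about the type of the id column.
spark.udf.register("SEL_ID", sel_id, IntegerType())

If the lookup table is too large to collect, the standard alternative is to express the lookup as a join between the two DataFrames instead of a per-row UDF.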
02-01-2021 10:56 AM
Hi there. I guess I'm a bit late, but do you remember how, and if, you fixed this issue? I'm getting the exact same problem. @dtr