I am trying to write a function in Azure Databricks that calls spark.sql inside its body, but it looks like spark.sql cannot be used on worker nodes:
def SEL_ID(value, index):
    # some processing on value here
    ans = spark.sql(f"SELECT id FROM table WHERE bin = {index}")
    return ans

spark.udf.register("SEL_ID", SEL_ID)
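For context, this is roughly how I intend to call it (the table and column names are placeholders for my actual schema):

result = spark.sql("SELECT value, SEL_ID(value, bin) AS id FROM other_table")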
I am getting the following error:
PicklingError: Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
Is there any way to overcome this? I am using the function above to look up ids from another table for each input row.
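In case it clarifies what I am after, here is a sketch of the join-based alternative I suspect I should use instead of a per-row lookup (df and the table/column names stand in for my actual data); I would like to know whether this is the right direction or whether the UDF approach can be salvaged:

# Sketch of a driver-side join instead of calling spark.sql per row in a UDF.
# "table", "bin", "id", and df are placeholders from my example above.
lookup = spark.table("table").select("bin", "id")
result = df.join(lookup, on="bin", how="left")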