09-30-2021 03:54 AM
I have a job that runs with no issues on Databricks Runtime 7.3 LTS. When I upgraded to 8.3, it fails with the error: "An exception was thrown from a UDF: 'pyspark.serializers.SerializationError'... SparkContext should only be created and accessed on the driver".
In the notebook I use applyInPandas to apply a UDF to each group. Inside this UDF I pull data from Snowflake using the Spark session (spark.read.format(...)), and I understand that is why it fails.
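For context, a minimal sketch of the failing pattern (the table name and the connection options here are placeholders, not my real values):

```python
import pandas as pd

# Hypothetical Snowflake options -- placeholders, not real credentials.
SNOWFLAKE_OPTIONS = {"sfUrl": "...", "sfUser": "...", "sfDatabase": "..."}

def enrich_group(pdf: pd.DataFrame) -> pd.DataFrame:
    # BROKEN on Spark 3.1+ / DBR 8.x: applyInPandas runs this function on
    # an executor, where the SparkSession/SparkContext must not be used.
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.getOrCreate()
    lookup = (
        spark.read.format("snowflake")
        .options(**SNOWFLAKE_OPTIONS)
        .option("dbtable", "LOOKUP_TABLE")
        .load()
        .toPandas()
    )
    return pdf.merge(lookup, on="id", how="left")

# Driver side -- this is what raises the SerializationError on DBR 8.x:
# df.groupBy("key").applyInPandas(enrich_group, schema=output_schema)
```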
My question is, why was it working in 7.3 LTS and it's not working now? What changed?
Thanks,
10-06-2021 01:18 AM
DBR 8.3 uses Spark 3.1.x. Per the migration guide, creating a SparkContext inside executors is restricted by default. You can re-enable it with the spark.executor.allowSparkContext configuration:
In Spark 3.0 and below, SparkContext can be created in executors. Since Spark 3.1, an exception will be thrown when creating SparkContext in executors. You can allow it by setting the configuration spark.executor.allowSparkContext when creating SparkContext in executors.
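For example, on Databricks this would go in the cluster's Spark config (it must be set before the context is created, so spark.conf.set at runtime is not enough). Note this only restores the old behavior; as later replies explain, the better fix is to restructure the code:

```
spark.executor.allowSparkContext true
```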
10-05-2021 10:38 AM
@Nicolas Escobar - could you please share the full error stack trace ?
10-10-2021 09:24 AM
To clarify a bit more: in Spark, you can never use a SparkContext or SparkSession within a task / UDF. This has always been true. If it worked before, it's because the SparkContext was accidentally captured in your code and sent along, but presumably you never actually used it; using it would have failed. Now it simply fails earlier.
The real solution is to change your code to not accidentally hold on to the SparkContext or SparkSession in your UDF.
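One common way to do that (a sketch under assumed names like make_enricher; the Snowflake read is illustrative) is to fetch the reference data once on the driver and let the UDF close over a plain pandas DataFrame instead of the SparkSession:

```python
import pandas as pd

def make_enricher(lookup: pd.DataFrame):
    # The returned UDF closes over a plain pandas DataFrame, not the
    # SparkSession, so it pickles cleanly and runs on executors.
    def enrich_group(pdf: pd.DataFrame) -> pd.DataFrame:
        return pdf.merge(lookup, on="id", how="left")
    return enrich_group

# Driver side (sketch): do the Snowflake read here, outside the UDF.
# lookup = spark.read.format("snowflake").options(**opts).load().toPandas()
# df.groupBy("key").applyInPandas(make_enricher(lookup), schema=out_schema)
```

If the lookup data is too large to collect, the usual alternative is a plain Spark-to-Spark join on the driver side instead of a per-group merge.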
03-07-2022 03:59 AM
Thanks Sean for your answer, it's clear.
I was just wondering why the code executed before with no errors and with the expected output. Now I understand: there was no such restriction before Spark 3.1, as Sandeep mentioned.
07-06-2022 07:34 AM
Hi @Sean Owen Thanks for highlighting this. Could you please share some sample code for what you mean by "not accidentally hold on to the SparkContext or SparkSession in your UDF"? Thanks
07-06-2022 07:39 AM
There are 1000 ways this could happen, so not really, but they're all the same idea: you can't reference the SparkContext or SparkSession object, directly or indirectly in a UDF. Simply, you cannot use it in the UDF code.
03-01-2022 03:33 AM
Adding to @Sean Owen's comments: the only reason this was working before is that the optimizer evaluated it locally on the driver rather than creating a context on the executors and evaluating it there.