Data Engineering

Job fails after runtime upgrade

NicolasEscobar
New Contributor II

I have a job that runs with no issues on Databricks Runtime 7.3 LTS. When I upgraded to 8.3, it fails with the error: An exception was thrown from a UDF: 'pyspark.serializers.SerializationError'... SparkContext should only be created and accessed on the driver

In the notebook I use applyInPandas to apply a UDF to each group. Inside this UDF I pull data from Snowflake using the Spark session (spark.read.format(...)), and I understand that is the reason it fails.
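Roughly, the pattern looks like this (the connection options, table, and column names are illustrative placeholders, not the actual job code):

import pandas as pd

# Sketch of the failing pattern only. "spark", "sf_options", and "df" are
# assumed to exist in the notebook; "MY_TABLE" and "group_key" are placeholders.
def per_group(pdf: pd.DataFrame) -> pd.DataFrame:
    # Using the Spark session inside the UDF means it runs on an executor,
    # which is what triggers the SerializationError on DBR 8.3 / Spark 3.1.
    extra = (
        spark.read.format("snowflake")
        .options(**sf_options)
        .option("dbtable", "MY_TABLE")
        .load()
        .toPandas()
    )
    return pdf.merge(extra, on="group_key", how="left")

result = df.groupBy("group_key").applyInPandas(
    per_group, schema="group_key string, value double"
)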

My question is: why was it working on 7.3 LTS but isn't working now? What changed?

Thanks,

ACCEPTED SOLUTION

User16763506586
Contributor

DBR 8.3 uses Spark 3.1.x. Per the Spark migration guide, using a SparkContext inside an executor is restricted by default. You can re-enable the old behaviour with spark.executor.allowSparkContext.

In Spark 3.0 and below, SparkContext can be created in executors. Since Spark 3.1, an exception will be thrown when creating SparkContext in executors. You can allow it by setting the configuration spark.executor.allowSparkContext when creating SparkContext in executors.
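If you do want to restore the pre-3.1 behaviour (generally not recommended), a minimal sketch of setting the flag when building the session could look like this; on Databricks you would typically set it in the cluster's Spark config instead:

from pyspark.sql import SparkSession

# Sketch only: re-enables the pre-3.1 behaviour of allowing a SparkContext
# to be created on executors. Equivalent cluster Spark config entry:
#   spark.executor.allowSparkContext true
spark = (
    SparkSession.builder
    .config("spark.executor.allowSparkContext", "true")
    .getOrCreate()
)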


7 REPLIES

shan_chandra
Databricks Employee

@Nicolas Escobar - could you please share the full error stack trace?


sean_owen
Databricks Employee

To clarify a bit more: in Spark, you can never use a SparkContext or SparkSession within a task / UDF. This has always been true. If it worked before, it's because the SparkContext was accidentally captured in your code and sent along with it, but I guess you never actually tried to use it; that would have failed. Now it just fails earlier.

The real solution is to change your code so that it does not accidentally hold on to the SparkContext or SparkSession in your UDF.
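A minimal sketch of that kind of restructuring, assuming hypothetical names (sf_options, LOOKUP_TABLE, source_df, group_key, and value are placeholders, not from the original job):

import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the Snowflake data once, on the driver, instead of inside the UDF.
lookup_df = (
    spark.read.format("snowflake")
    .options(**sf_options)              # placeholder connection options
    .option("dbtable", "LOOKUP_TABLE")  # placeholder table name
    .load()
)

# Join the lookup data into the source DataFrame up front, so the grouped
# UDF only ever receives plain pandas DataFrames and never touches Spark.
joined = source_df.join(lookup_df, on="group_key", how="left")

def per_group(pdf: pd.DataFrame) -> pd.DataFrame:
    # Pure pandas logic only: no spark.read, no SparkSession, no SparkContext.
    pdf["result"] = pdf["value"].cumsum()
    return pdf

out = joined.groupBy("group_key").applyInPandas(
    per_group,
    schema=joined.schema.add("result", "double"),
)

The key design choice is that every Spark API call happens on the driver; the function passed to applyInPandas only touches pandas objects, so nothing Spark-related is captured in its closure.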

Thanks Sean for your answer, it's clear.

I was just wondering why the code previously ran with no errors and produced the expected output. Now I understand that there was no restriction before Spark 3.1, and this changed with that release, as Sandeep mentioned.

Hi @Sean Owen, thanks for highlighting this. Could you please provide some sample code for what you mean by "not accidentally holding on to the SparkContext or SparkSession in your UDF"? Thanks

There are a thousand ways this could happen, so not really, but they all come down to the same idea: you can't reference the SparkContext or SparkSession object, directly or indirectly, in a UDF. Put simply, you cannot use it in the UDF code.

User16873042682
New Contributor II

Adding to @Sean Owen's comments: the only reason this was working is that the optimizer was evaluating it locally rather than creating a context on the executors and evaluating it there.
