Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

PySpark UDF is taking a long time to process

sanjay
Valued Contributor II

Hi,

I have a UDF that runs for each Spark DataFrame row, does some complex processing, and returns a string output. It takes very long when the data is around 15,000 rows. I have configured the cluster with autoscaling, but it is not spinning up more workers.

Please suggest how to make the UDF faster, or point me to any reference implementations.
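
For context, a simplified sketch of the current setup (the column names and the processing body below are placeholders; the real logic is more involved):

from pyspark.sql import functions as F
from pyspark.sql.types import StringType

def complex_processing(value):
    # placeholder for the heavy per-row logic
    return f"processed:{value}"

process_udf = F.udf(complex_processing, StringType())

# df has ~15,000 rows; this is the slow step
result = df.withColumn("output", process_udf(F.col("input_col")))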

Regards,

Sanjay

1 ACCEPTED SOLUTION

Accepted Solutions

Lakshay
Databricks Employee

Hi @Sanjay Jain, Python UDFs are generally slow because every row has to be serialized and handed to a separate Python process, and this overhead can also lead to OOM errors. To resolve this issue, please consider the options below:

  1. Use Spark built-in functions to implement the same functionality.
  2. Use a pandas UDF instead of a plain Python UDF.
  3. If the above two options are not suitable, enable the configuration spark.databricks.execution.pythonUDF.arrow.enabled = True (a short sketch follows this list).
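
For illustration, a minimal sketch of option 3, assuming a Databricks notebook (where the spark session is predefined) and a runtime that supports Arrow-optimized Python UDFs; the DataFrame, column, and function names are placeholders:

from pyspark.sql import functions as F
from pyspark.sql.types import StringType

# Let Arrow handle serialization between the JVM and the Python workers
spark.conf.set("spark.databricks.execution.pythonUDF.arrow.enabled", "true")

@F.udf(returnType=StringType())
def process(value):
    # placeholder for the existing per-row logic
    return f"processed:{value}"

result = df.withColumn("output", process(F.col("input_col")))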


3 REPLIES

pvignesh92
Honored Contributor

@Sanjay Jain Hi Sanjay. You did not mention what kind of processing you are doing in the UDF. A Python UDF will definitely create performance issues, because the Spark optimizer cannot optimize whatever happens inside the UDF. Please see if you can do any of that processing with Spark native functions.

If you still need to use a Python UDF, see if you can try a pandas UDF instead. This can provide significant performance improvements for certain types of operations: pandas UDFs use Apache Arrow to transfer data between Python and Spark and process it in batches, which can result in much faster processing times.
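
As an illustration, a minimal pandas UDF sketch of the same row-wise pattern, assuming Spark 3.x (names and the batch logic are placeholders):

import pandas as pd
from pyspark.sql import functions as F

# A pandas UDF receives a whole batch as a pandas Series instead of one
# value per call; batches are exchanged with the JVM via Apache Arrow.
@F.pandas_udf("string")
def process_batch(values: pd.Series) -> pd.Series:
    # placeholder: apply the real logic across the batch
    return values.map(lambda v: f"processed:{v}")

result = df.withColumn("output", process_batch(F.col("input_col")))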

Rishabh-Pandey
Esteemed Contributor

Check whether you can do the same things using PySpark native logic and functions; if so, there is no need to use a UDF at all. In most cases we can achieve it with PySpark directly, and a UDF will definitely create performance issues.
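
For example, a hypothetical row-wise transformation rewritten with built-in column functions only (the actual logic will differ):

from pyspark.sql import functions as F

# Built-in functions stay inside the JVM and are visible to the optimizer,
# so no per-row Python call is needed.
result = df.withColumn(
    "output",
    F.concat(F.lit("processed:"), F.upper(F.trim(F.col("input_col")))),
)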

Rishabh Pandey
