Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Serverless job error - spark.rpc.message.maxSize

adurand-accure
New Contributor II

Hello, 

I am facing this error after moving a workflow to serverless compute:

ERROR: SparkException: Job aborted due to stage failure: Serialized task 482:0 was 269355219 bytes, which exceeds max allowed: spark.rpc.message.maxSize (268435456 bytes). Consider increasing spark.rpc.message.maxSize or using broadcast variables for large values.

On a job cluster we could set spark.rpc.message.maxSize manually to a value greater than 268 MB, but that does not seem to be possible on serverless.

Any help is appreciated, thanks!

4 REPLIES

Alberto_Umana
Databricks Employee

Hi @adurand-accure,

In serverless mode, you cannot directly modify the spark.rpc.message.maxSize parameter. To work around this limitation, you can consider the following approaches:

  1. Broadcast Variables: Use broadcast variables for large values so that large datasets are shipped to every node once instead of being embedded in each serialized task (a short sketch follows this list).
  2. Optimize Data Processing: Break the processing into smaller tasks or stages so that no serialized task exceeds the limit. This may mean restructuring your logic to handle smaller chunks of data at a time.
  3. Data Partitioning: Keep partitions small enough to avoid oversized serialized tasks; you can adjust the partition count with repartition or coalesce.
  4. Review Code for Inefficiencies: Look for unnecessary data shuffling, large intermediate data structures, or anything else that inflates task size.
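
A minimal PySpark sketch of point 1, with hypothetical table and column names (the thread doesn't show the actual workflow). The DataFrame-level broadcast hint ships the small side to each executor once rather than embedding driver-side data in every serialized task, and it does not require direct access to sparkContext:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

# Hypothetical inputs: a large fact table and a small dimension table that
# might otherwise be collected to the driver and captured in a task closure.
fact_df = spark.table("main.sales.transactions")   # assumed table name
dim_df = spark.table("main.sales.product_dim")     # assumed table name, small

# Broadcast hint: the small dimension is sent to each executor once instead
# of being serialized into every task or shuffled with the large side.
joined = fact_df.join(broadcast(dim_df), on="product_id", how="left")

joined.write.mode("overwrite").saveAsTable("main.sales.transactions_enriched")
```

The same idea applies to any small reference dataset used to enrich a larger one.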

adurand-accure
New Contributor II

Hello Alberto, 

Thanks, I already got this answer from the AI assistant and it didn't solve my problem; I am looking for something different here 🙂

 

PiotrMi

Hey @adurand-accure,

Without details about how your workflow works, it's hard to help. If the job fails on the part of the workflow where you process large chunks of data, then partitions or batches are probably your answer. Are you able to share some details?
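
For readers following along, a rough sketch of the partition/batch idea with assumed table and column names: splitting the work into more, smaller tasks keeps each serialized task well under the RPC limit.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Assumed source table; substitute the workflow's real input.
events_df = spark.table("main.raw.events")

# Increase the partition count so each task handles a smaller slice of data.
# 400 is illustrative only; tune it to the data volume.
events_df = events_df.repartition(400, "event_date")

# ...heavy transformations would run here, task by task...

(events_df.write
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("main.curated.events"))
```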

adurand-accure
New Contributor II

Hello PiotrMi,
We found out that the problem was caused by a collect() and managed to fix it by changing that part of the code.
Thanks for your quick replies.
Best regards,
Antoine
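
The actual code change isn't shown in the thread, but a hypothetical before/after illustrates the kind of fix described: keeping the work on the executors instead of pulling rows to the driver with collect() and pushing them back through the task serializer.

```python
import pyspark.sql.functions as F
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
events_df = spark.table("main.raw.events")  # assumed input table

# Before (problematic, hypothetical): collecting every row to the driver and
# reusing the result in later Spark operations embeds that data in the task
# closure, which is what overflows spark.rpc.message.maxSize.
# rows = events_df.collect()
# totals = compute_totals_on_driver(rows)   # hypothetical helper

# After: keep the aggregation distributed and only persist the small result.
totals_df = (
    events_df
    .groupBy("customer_id")                       # assumed column
    .agg(F.sum("amount").alias("total_amount"))   # assumed column
)
totals_df.write.mode("overwrite").saveAsTable("main.curated.customer_totals")
```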
