After several iterations of filter and union, the data is bigger than spark.driver.maxResultSize
09-22-2021 12:36 PM
My process for building the model is:
1. Filter the dataset and split it into two datasets.
2. Fit a model based on the two datasets.
3. Union the two datasets.
4. Repeat steps 1-3 (a rough sketch of the loop follows this list).
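Roughly, the loop looks like this (the parquet path, the `label` column, and `fit_model` below are simplified placeholders, not my actual code):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical input; the path and the "label" column are placeholders.
df = spark.read.parquet("/path/to/data")

def fit_model(part_a, part_b):
    """Placeholder for the actual model-fitting step."""
    ...

for i in range(5):  # "several iterations"
    # Step 1: filter the dataset and split it into two datasets.
    part_a = df.filter(df["label"] == 1)
    part_b = df.filter(df["label"] != 1)

    # Step 2: fit a model based on the two datasets.
    fit_model(part_a, part_b)

    # Step 3: union the two datasets. Each pass nests another
    # filter/union on top of the previous plan, so the lineage keeps
    # growing even though the rows and columns stay the same.
    df = part_a.union(part_b)
```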
The problem is that after several iterations the model fitting time increases dramatically, and I get this error: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 9587 tasks (4.0 GB) is bigger than spark.driver.maxResultSize (4.0 GB). Yet the number of columns and rows in the data stays the same.
Since the model fitting time also keeps growing, I don't think increasing spark.driver.maxResultSize will solve this problem. Any suggestions? Thanks.
09-22-2021 01:11 PM
I assume you are using PySpark to train a model? It sounds like you are collecting data on the driver, and you likely need to increase spark.driver.maxResultSize. Can you share any code?
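For reference, a minimal sketch of raising the limit (the 8g value is just an example; on Databricks this would normally go in the cluster's Spark config rather than in code):

```python
from pyspark.sql import SparkSession

# spark.driver.maxResultSize is read when the driver starts, so it must
# be set at session creation (or in the cluster's Spark config);
# spark.conf.set() on a running session will not change it.
# The default is 1g; 0 means unlimited.
spark = (
    SparkSession.builder
    .config("spark.driver.maxResultSize", "8g")  # example value only
    .getOrCreate()
)
```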

