My process for building the model is (see the rough sketch after this list):
- filter the dataset and split it into two datasets
- fit a model based on the two datasets
- union the two datasets
- repeat steps 1-3
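
For reference, here is a minimal PySpark sketch of the loop. The estimator, column names, input path, filter condition, and split ratio are placeholders for illustration, not my real code:

```python
# Rough sketch of the iterative workflow; all names and thresholds are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("iterative-fit").getOrCreate()
df = spark.read.parquet("/path/to/data")  # placeholder input

for i in range(20):  # several iterations
    # step 1: filter the dataset and split it into two datasets
    filtered = df.filter(df["score"] > 0.5)
    part_a, part_b = filtered.randomSplit([0.5, 0.5], seed=i)

    # step 2: fit a model based on the two datasets
    lr = LogisticRegression(labelCol="label", featuresCol="features")
    model = lr.fit(part_a.union(part_b))

    # step 3: union the two datasets and feed the result into the next iteration
    df = part_a.union(part_b)
```

The point of the sketch is only to show the shape of the loop: each iteration derives new DataFrames from the previous iteration's result.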
The problem is that after several iterations the model fitting time increases dramatically, and I get this error: `org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 9587 tasks (4.0 GB) is bigger than spark.driver.maxResultSize (4.0 GB)`. Yet the data itself does not grow: the number of rows and columns stays the same across iterations.
Since the model fitting time also keeps increasing, I don't think raising spark.driver.maxResultSize will solve the underlying problem. Any suggestions? Thanks.