How to solve cluster breakdown due to GC when training a pyspark.ml Random Forest
I am trying to train and optimize a random forest. At first the cluster handles the garbage collection fine, but after a couple of hours the cluster breaks down as garbage collection has gone up significantly. The train_df has a size of 6,365,018 records...
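For context, a minimal sketch of the kind of cached pipeline the question and the reply below seem to describe. The names train_df, pipeline_data, and scaled_train_data come from the thread; the feature columns, the label column, and the exact pipeline stages are assumptions:

```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.classification import RandomForestClassifier

# train_df: the ~6.3M-record training DataFrame mentioned in the post.
# Hypothetical feature/label columns -- the original post does not list them.
feature_cols = ["f1", "f2", "f3"]

# Assemble and scale the features, fitting the preprocessing stages once.
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features_raw")
scaler = StandardScaler(inputCol="features_raw", outputCol="features")
pipeline_data = Pipeline(stages=[assembler, scaler]).fit(train_df)

# The pattern under discussion: transform the full training set, cache it,
# and then tune the random forest against the cached DataFrame.
scaled_train_data = pipeline_data.transform(train_df).cache()

rf = RandomForestClassifier(labelCol="label", featuresCol="features")
model = rf.fit(scaled_train_data)
```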
- 2949 Views
- 3 replies
- 2 kudos
Latest Reply
The cache is expensive and wants to save that data to memory and disk (if there is no more space left in memory). I know that, in theory, it should improve performance, but it can make things worse. I would just put scaled_train_data = pipeline_data.transform(train_df)... (see the sketch below)
- 2 kudos
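A minimal sketch of the approach the reply suggests, reusing the pipeline_data and train_df names from the thread and the same hypothetical label column as above:

```python
from pyspark.ml.classification import RandomForestClassifier

# Following the reply's suggestion: transform without caching, so Spark does
# not have to hold the full scaled DataFrame in memory/disk for the whole run.
scaled_train_data = pipeline_data.transform(train_df)

rf = RandomForestClassifier(labelCol="label", featuresCol="features")
model = rf.fit(scaled_train_data)

# If a cache is genuinely needed for repeated passes (e.g. during tuning),
# releasing it explicitly once tuning finishes keeps GC pressure down:
#   scaled_train_data.cache()
#   ... fit / tune ...
#   scaled_train_data.unpersist()
```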