Hi!
I am training a Random Forest (pyspark.ml.classification.RandomForestClassifier) on Databricks with 1,000,000 training examples and 25 features. The cluster has one driver (16 GB memory, 4 cores), 2-6 workers (32-96 GB memory, 8-24 cores), and an 11.3.x-cpu-ml-scala2.12 runtime. I use default values for most hyperparameters, except maxDepth=18 and numTrees=150 (no tuning). Training takes about 80 minutes.
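For reference, a minimal sketch of my training code (the column names "features" and "label" and the variable train_df are placeholders for my actual data):

```python
from pyspark.ml.classification import RandomForestClassifier

# Sketch of my current setup; only maxDepth and numTrees deviate from defaults.
rf = RandomForestClassifier(
    featuresCol="features",  # assembled vector of the 25 features
    labelCol="label",
    maxDepth=18,             # non-default
    numTrees=150,            # non-default
    # everything else at defaults, e.g. maxBins=32, subsamplingRate=1.0
)

model = rf.fit(train_df)     # train_df: ~1,000,000 rows
```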
What parameters should I play around with to speed up training efficiently (i.e., without wasting resources)? I am already leveraging multiple nodes, right? And what about the maximum number of workers, worker type (general purpose, memory optimized, compute optimized, HDD, Delta cache accelerated), GPUs, spot instances, autoscaling, and Photon acceleration?
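In case it's relevant: here is how I would check whether the training data is actually spread across the cluster (this is just my sanity check, not something I've profiled in depth; the partition count of 64 below is purely illustrative):

```python
# How many partitions does the training DataFrame have?
# Spark ML parallelizes tree training across partitions/cores,
# so a low count relative to total worker cores limits parallelism.
print(train_df.rdd.getNumPartitions())

# If the count is low, repartitioning might help, e.g.:
# train_df = train_df.repartition(64)  # illustrative number
```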
Your input would be highly appreciated!