Cluster Configuration for ML Model Training
03-30-2023 01:28 AM
Hi!
I am training a random forest (pyspark.ml.classification.RandomForestClassifier) on Databricks with 1,000,000 training examples and 25 features. The cluster has one driver (16 GB memory, 4 cores), 2-6 workers (32-96 GB memory, 8-24 cores), and an 11.3.x-cpu-ml-scala2.12 runtime. I use default values for most hyperparameters, apart from maxDepth=18 and numTrees=150 (no tuning). Training takes 80 minutes.
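For reference, a minimal sketch of the training setup, assuming a table with 25 numeric feature columns and a binary "label" column (the table and column names are placeholders, and the repartition count is an assumption, not a value from this thread):

```python
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

# Hypothetical input table with 25 numeric feature columns and a "label" column.
df = spark.table("training_data")

assembler = VectorAssembler(
    inputCols=[c for c in df.columns if c != "label"],
    outputCol="features",
)
train = assembler.transform(df)

# Repartitioning to roughly the total number of worker cores can help keep all
# executors busy during tree building (64 is a guess for 2-6 workers with 8-24 cores).
train = train.repartition(64).cache()
train.count()  # materialize the cache before timing the fit

rf = RandomForestClassifier(
    labelCol="label",
    featuresCol="features",
    numTrees=150,
    maxDepth=18,
    # maxBins (default 32) and subsamplingRate (default 1.0) are the knobs most
    # likely to trade a little accuracy for a large speed-up; left at defaults here.
)
model = rf.fit(train)
```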
Which parameters should I play around with to speed up training efficiently (i.e. without wasting resources)? I am already leveraging multiple nodes, right? What about the maximum number of workers, worker type (general purpose, memory optimized, compute optimized, HDD, Delta cache accelerated), GPUs, spot instances, autoscaling, or Photon acceleration?
Your input would be highly appreciated!
03-30-2023 06:38 AM
@John B It depends on the use case, but for ML jobs (deep learning, etc.) you can go with a GPU cluster, which will be faster than a standard cluster. Note that Photon is not supported on the ML Runtime. Once you are comfortable with the final sizing, you can schedule the training as a job.
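Regarding scheduling it as a job, here is a rough sketch of the `new_cluster` portion of a Jobs API payload that pins the sizing discussed above (the instance types are placeholders and depend on your cloud provider; this is illustrative, not a recommendation from this thread):

```python
# Illustrative "new_cluster" spec for a scheduled training job; instance types
# are placeholders, and min/max workers mirror the 2-6 worker range from the post.
new_cluster = {
    "spark_version": "11.3.x-cpu-ml-scala2.12",  # ML runtime, so Photon is not available
    "node_type_id": "<worker-instance-type>",
    "driver_node_type_id": "<driver-instance-type>",
    "autoscale": {"min_workers": 2, "max_workers": 6},
}
```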
03-31-2023 07:11 PM
Hi @John B
Hope everything is going great.
Just wanted to check in to see if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please let us know so we can help you.
Cheers!
04-13-2023 12:43 AM
Hi @Vidula Khanna
Unfortunately, no answer provided so far has helped me resolve my issue.
John.