With AWS/Azure autoscaling, how do we fine-tune Spark jobs?
With the recommended autoscaling settings (e.g., https://docs.databricks.com/clusters/cluster-config-best-practices.html), is it possible to fine-tune a Spark job dynamically, given that the number of executors could change at any time?
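For context, a minimal PySpark sketch of the kind of executor-count-dependent tuning the question is asking about; the "2x partitions per core" rule of thumb is an illustrative assumption, not a recommendation from this thread:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# defaultParallelism roughly tracks the total cores across the executors
# registered right now, so any value derived from it only reflects the
# cluster size at this instant -- under autoscaling it can be stale minutes later.
cores_now = spark.sparkContext.defaultParallelism

# Hypothetical rule of thumb: ~2 shuffle partitions per currently available core.
spark.conf.set("spark.sql.shuffle.partitions", str(cores_now * 2))
```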
Latest Reply
@Andrew Fogarty Instead of setting this dynamically per job, I would suggest adding those settings to the Spark cluster configuration itself; that way you can also save cost.
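A minimal sketch of what cluster-level configuration might look like, assuming the Databricks Clusters API (`clusters/create`); the cluster name, runtime, node type, worker counts, and Spark conf values are illustrative assumptions, not recommendations:

```python
import requests

# Illustrative cluster spec: tuning placed in spark_conf on the cluster is
# inherited by every job, instead of each job re-tuning itself as executors
# come and go under autoscaling.
cluster_spec = {
    "cluster_name": "autoscaling-etl",          # hypothetical name
    "spark_version": "13.3.x-scala2.12",        # example runtime
    "node_type_id": "i3.xlarge",                # example AWS node type
    "autoscale": {"min_workers": 2, "max_workers": 10},
    "spark_conf": {
        # Let Adaptive Query Execution pick shuffle partition counts at runtime
        # rather than hard-coding a value that assumes a fixed executor count.
        "spark.sql.adaptive.enabled": "true",
        "spark.sql.adaptive.coalescePartitions.enabled": "true",
    },
}

# resp = requests.post(
#     "https://<workspace-url>/api/2.0/clusters/create",  # placeholder workspace URL
#     headers={"Authorization": "Bearer <token>"},          # placeholder token
#     json=cluster_spec,
# )
```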