> What determines when the cluster autoscaling activates to add and remove workers?
During scale-down, the service removes a worker only if it is idle and does not hold any shuffle data. This allows aggressive downsizing without killing tasks or recomputing intermediate results. The cluster is also scaled up aggressively in response to demand, keeping responsiveness high without sacrificing efficiency. More details at https://databricks.com/blog/2018/05/02/introducing-databricks-optimized-auto-scaling.html
> Also, can it be adjusted?
Databricks offers two types of cluster node autoscaling: standard and optimized. Depending on the type, the parameters you can tune are:

- `spark.databricks.aggressiveWindowDownS` (optimized autoscaling: the window, in seconds, used for scale-down decisions)
- `spark.databricks.autoscaling.standardFirstStepUp` (standard autoscaling: the size of the first scale-up step)
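
For reference, here is a minimal sketch of setting one of these parameters when creating an autoscaling cluster through the Clusters REST API (`POST /api/2.0/clusters/create`). The host, token, runtime version, node type, and the `600`-second value are placeholders, and the comment on the conf reflects its assumed semantics; note that these confs go in the cluster-level `spark_conf` at creation time rather than being set from a notebook session.

```python
import os
import requests

# Placeholders: supply your own workspace URL and personal access token.
DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]

payload = {
    "cluster_name": "autoscaling-demo",
    "spark_version": "13.3.x-scala2.12",            # placeholder runtime version
    "node_type_id": "i3.xlarge",                    # placeholder node type
    # Autoscaling bounds: the service adds/removes workers within this range.
    "autoscale": {"min_workers": 2, "max_workers": 8},
    "spark_conf": {
        # Assumed semantics: how often (in seconds) down-scaling
        # decisions are made under optimized autoscaling.
        "spark.databricks.aggressiveWindowDownS": "600",
    },
    "autotermination_minutes": 60,
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json()["cluster_id"])
```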