At a high level, a VACUUM operation on a Delta table has two steps.
1) Identifying the stale files to remove, based on the retention threshold specified in the VACUUM command.
2) Deleting the files identified in Step 1.
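For reference, here is a minimal sketch of triggering VACUUM from PySpark. The table path and retention value are placeholders, and it assumes the delta-spark package is available on the cluster.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# Build a session with Delta Lake enabled (not needed in a Databricks
# notebook, where `spark` is already provided).
spark = (
    SparkSession.builder
    .appName("delta-vacuum-example")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Hypothetical table path; point this at your own Delta table.
delta_table = DeltaTable.forPath(spark, "/mnt/datalake/events")

# Identify and delete files no longer referenced by the table and older
# than the retention threshold (168 hours = the default 7 days).
delta_table.vacuum(168)
```

If you only want to see which files Step 1 would identify, the SQL form supports a dry run, e.g. `VACUUM delta.`/mnt/datalake/events` RETAIN 168 HOURS DRY RUN`, which lists the candidate files without deleting them.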
Step 1 is performed by triggering a Spark job and therefore uses the executor resources on the cluster. Step 2, deleting the identified files, is performed by the Spark driver alone, and its speed depends on the underlying storage system. Because executor resources are needed only for the first step and sit idle during the second, it is highly recommended to use an auto-scaling cluster with a minimum of one worker node, so the cluster can scale down once the first step is complete (see the sketch below).
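The recommendation above maps to an autoscaling cluster definition along these lines. This is a sketch using Databricks-style cluster spec fields; the runtime version, instance type, and worker counts are placeholder assumptions.

```python
# Autoscale between 1 and 8 workers: scale up for the stale-file
# identification job (Step 1), then shrink toward a single worker while
# the driver performs the deletions (Step 2).
vacuum_cluster_spec = {
    "spark_version": "13.3.x-scala2.12",  # hypothetical runtime version
    "node_type_id": "i3.xlarge",          # hypothetical instance type
    "autoscale": {
        "min_workers": 1,
        "max_workers": 8,
    },
}
```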