You could try enabling fair scheduling on the SparkContext. In the Databricks workspace, go to Compute, open the cluster's configuration page, and navigate to Configuration -> Advanced options -> Spark -> Spark config.
In the Spark config box for the cluster, add this line:

spark.scheduler.mode FAIR
By default, Spark schedules jobs first in, first out (FIFO). If a large job sits at the head of the queue, later jobs are delayed until it finishes. With FAIR scheduling, Spark assigns tasks across jobs in a round-robin fashion, so every job gets a roughly equal share of cluster resources, which makes it a good fit for shared clusters. Source: https://spark.apache.org/docs/latest/job-scheduling.html#scheduling-within-an-application
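To make the difference concrete, here is a minimal sketch of the scenario FAIR mode helps with: two queries submitted concurrently from one notebook (or from two users sharing the cluster). The table names are hypothetical placeholders; the point is only how the jobs share task slots once spark.scheduler.mode is FAIR.

```python
from concurrent.futures import ThreadPoolExecutor
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def heavy_query():
    # Hypothetical large table; stands in for a long-running job on the shared cluster.
    return spark.table("sales_large").groupBy("region").count().collect()

def light_query():
    # Hypothetical small table; the kind of quick query FIFO would leave waiting.
    return spark.table("users_small").limit(100).collect()

with ThreadPoolExecutor(max_workers=2) as pool:
    heavy = pool.submit(heavy_query)
    light = pool.submit(light_query)
    # Under FIFO, the light query's tasks queue behind the heavy job's stages;
    # under FAIR, both jobs receive a roughly equal share of task slots.
    print(light.result()[:3])
    print(heavy.result()[:3])
```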
Another option is to assign individual jobs to scheduler pools. Putting higher-priority jobs in a dedicated pool helps ensure they always have compute resources available.
Please refer to this Databricks documentation article on scheduler pools, including a code example: https://docs.databricks.com/en/structured-streaming/scheduler-pools.html
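As a rough sketch of the idea in that article, you can tag work from a notebook or thread with a pool name via setLocalProperty. The pool names and table names below are hypothetical; pools not defined in a fairscheduler.xml just pick up default settings.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# Run a latency-sensitive query in its own pool...
sc.setLocalProperty("spark.scheduler.pool", "high_priority")
spark.table("events").filter("level = 'ERROR'").count()

# ...and put batch work in a separate pool so it cannot starve the first one.
sc.setLocalProperty("spark.scheduler.pool", "batch_etl")
(
    spark.table("events")
    .groupBy("level")
    .count()
    .write.mode("overwrite")
    .saveAsTable("event_counts")
)

# Clear the property to fall back to the default pool.
sc.setLocalProperty("spark.scheduler.pool", None)
```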
For jobs that are likely to be resource hogs, you could schedule them as Workflows and configure separate job clusters to handle those workloads. This also reduces Databricks costs, since job compute is billed at a lower rate than all-purpose compute.
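You can create such a workflow in the UI, or with the Jobs API. Below is a rough sketch using the Jobs 2.1 create endpoint; the workspace URL, token, notebook path, Spark version, and node type are placeholders you would replace with your own values.

```python
import requests

host = "https://<your-workspace>.cloud.databricks.com"
token = "<personal-access-token>"

payload = {
    "name": "nightly-heavy-aggregation",
    "tasks": [
        {
            "task_key": "heavy_agg",
            "notebook_task": {"notebook_path": "/Workspace/Jobs/heavy_aggregation"},
            # A dedicated job cluster, created for this run and terminated afterwards,
            # keeps the heavy workload off the shared all-purpose cluster.
            "new_cluster": {
                "spark_version": "14.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 4,
            },
        }
    ],
}

resp = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])
```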
Lastly, consider enabling autoscaling for your cluster if you have not already. When cluster resources are saturated, Databricks can dynamically add workers up to the configured maximum. Note that autoscaling never resizes the driver node, so it won't help with queries that overload the driver (for example, collecting large results back to the driver).
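Autoscaling is normally just a checkbox plus min/max worker counts on the cluster's configuration page, but for completeness here is a hedged sketch of the same setting applied through the Clusters 2.0 edit endpoint. The cluster ID, name, Spark version, node type, and worker bounds are placeholders, and the edit call expects the full cluster spec, so adapt it to your existing configuration.

```python
import requests

host = "https://<your-workspace>.cloud.databricks.com"
token = "<personal-access-token>"

payload = {
    "cluster_id": "<cluster-id>",
    "cluster_name": "shared-analytics",
    "spark_version": "14.3.x-scala2.12",
    "node_type_id": "i3.xlarge",
    # Autoscaling bounds: the cluster grows toward max_workers under load
    # and shrinks back toward min_workers when idle.
    "autoscale": {"min_workers": 2, "max_workers": 8},
    # Keep the FAIR scheduler setting from earlier in this answer.
    "spark_conf": {"spark.scheduler.mode": "FAIR"},
}

resp = requests.post(
    f"{host}/api/2.0/clusters/edit",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
```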