Hello @Abser786,
There is a difference between Dynamic Resource Allocation and the scheduler policy.
Dynamic Resource Allocation means acquiring more compute as needed when the current compute is fully consumed; this can be achieved with the autoscaling feature/config on the job cluster resources (details here).
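As a rough sketch of what that looks like in a job cluster definition (the runtime version and node type below are placeholders, not your actual values), autoscaling is enabled by giving the cluster an `autoscale` range instead of a fixed worker count:

```python
# Hedged sketch of a Databricks job cluster spec with autoscaling enabled,
# so the cluster adds workers when current compute is fully consumed and
# removes them again when load drops. Values are placeholders.
new_cluster = {
    "spark_version": "13.3.x-scala2.12",   # assumed runtime version
    "node_type_id": "Standard_DS3_v2",     # assumed node type
    "autoscale": {
        "min_workers": 2,                  # baseline size
        "max_workers": 8,                  # upper bound the cluster can grow to
    },
}
```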
The scheduler policy, on the other hand, which appears to differ between the tasks in the case you mentioned, can be controlled and aligned. If autoscaling is not leveraged here, I think the best approach is to use the FAIR scheduler for both tasks. This can be done by setting spark.scheduler.mode to FAIR in the job or cluster configuration, so the tasks use FAIR scheduling rather than the default FIFO.
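A minimal sketch of that setting, assuming you build the SparkSession yourself (e.g., via spark-submit); on a Databricks cluster the SparkSession already exists, so put "spark.scheduler.mode FAIR" under the cluster's Spark config instead. The app and pool names below are just placeholders:

```python
from pyspark.sql import SparkSession

# Enable FAIR scheduling instead of the default FIFO.
spark = (
    SparkSession.builder
    .appName("fair-scheduling-example")        # placeholder app name
    .config("spark.scheduler.mode", "FAIR")
    .getOrCreate()
)

# Optionally pin this thread's jobs to a named pool so concurrent jobs
# share executors fairly; the pool name is a placeholder.
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "shared_pool")
```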
Regards