OK, I get it.
An execution context is not bound to a CPU; it is more like a session. So the limit of 150 execution contexts means that up to 150 sessions/Spark programs can run simultaneously on the cluster (whether the hardware can actually keep up with that is another question).
Knowing that, your question really boils down to:
"if the number of Spark tasks is more than the cores available in the driver..."
First: the driver only does orchestration, it does not run Spark tasks (it does run native Python code, and it receives results when you call collect() etc.). The workers execute the tasks.
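To make that concrete, here is a minimal PySpark sketch (the toy DataFrame and column names are just for illustration): the transformation and aggregation run as tasks on the workers, and collect() pulls the small result back into the driver process.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# A toy DataFrame, purely for illustration.
df = spark.range(0, 1_000_000)

# Transformations are lazy; once an action triggers them, they run as tasks on the workers.
aggregated = df.withColumn("bucket", F.col("id") % 10).groupBy("bucket").count()

# collect() pulls the (small) aggregated result back into the driver process;
# the heavy lifting already happened on the workers.
rows = aggregated.collect()
print(rows)
```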
If there are more tasks than CPUs available across all workers, then either extra nodes are added (if you use autoscaling), or the tasks simply wait until resources become available.
This happens a lot in practice, especially on tables with many partitions (the partition count often exceeds the number of cores); you can check this yourself with the sketch below.
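A rough way to get a feel for how much queueing will happen is to compare a table's partition count with the cluster's default parallelism (the table name here is hypothetical, replace it with one of your own):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical table name, just for the sketch.
df = spark.read.table("my_schema.my_table")

num_tasks = df.rdd.getNumPartitions()                # tasks the next stage would spawn
task_slots = spark.sparkContext.defaultParallelism   # roughly the cores across the workers

print(f"{num_tasks} tasks for {task_slots} task slots")
if num_tasks > task_slots:
    print("The extra tasks queue up and run in waves.")
```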
Spark can handle this without any issue.
Timeouts can occur, however.