A high-concurrency cluster simply splits resources among users more evenly. So when 4 people run notebooks at the same time on a cluster with 4 CPUs, you can expect each of them to get roughly 1 CPU.
On a standard cluster, one person can occupy all worker CPUs, because a job with multiple partitions (for example 4) needs multiple cores: each CPU processes one partition at a time, so all 4 CPUs stay busy processing the 4 partitions, and other users' jobs wait in the queue until yours finishes.
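A minimal sketch of that partitions-to-cores relationship; the numbers are illustrative, not anything Databricks-specific:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# An RDD with 4 partitions yields 4 tasks, so on a 4-core worker one
# such job can keep the whole cluster busy by itself.
rdd = spark.sparkContext.parallelize(range(1_000_000), numSlices=4)
print(rdd.getNumPartitions())  # 4 partitions -> 4 parallel tasks
print(rdd.sum())               # occupies up to 4 cores while it runs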
On a standard cluster you can also manage resource allocation at the notebook level using scheduler pools. To do that, set a sparkContext property in the first line of the notebook:
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "pool1")
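For pools to actually share cores, the scheduler must run in FAIR mode (by default Spark schedules jobs FIFO). A minimal sketch, assuming you control the SparkSession config; on Databricks you would set spark.scheduler.mode in the cluster's Spark config instead, and the pool names here ("etl", "adhoc") are illustrative:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # FAIR mode shares tasks across pools instead of running jobs
    # strictly first-in-first-out.
    .config("spark.scheduler.mode", "FAIR")
    .getOrCreate()
)

# Each notebook picks its own pool in its first cell; jobs submitted
# afterwards inherit it, so two users' jobs share cores rather than queue.
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "etl")
df = spark.range(10_000_000)
print(df.count())  # this job runs in the "etl" pool

Note that a pool named this way is created on the fly with default settings; weights and minimum shares per pool can additionally be defined in an allocation file referenced by spark.scheduler.allocation.file.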