Hi all enthusiasts!
As per my understanding, when a user submits an application to a Spark cluster, the application itself specifies how much memory, how many executors, and so on it needs.
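For example, in plain Spark I would expect the resource request to look something like this (the exact values are just illustrative, and I know the same settings can also be passed as spark-submit flags):

```python
from pyspark.sql import SparkSession

# Illustrative resource request for a plain Spark application.
# Equivalent to passing --executor-memory / --num-executors etc.
# to spark-submit; the numbers here are arbitrary examples.
spark = (
    SparkSession.builder
    .appName("my-app")
    .config("spark.executor.memory", "8g")
    .config("spark.executor.cores", "4")
    .config("spark.executor.instances", "10")
    .getOrCreate()
)
```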
But in Databricks notebooks we never specify any of that. If we submit a notebook to a job cluster, how does the Databricks resource manager decide how many resources to allocate to it?
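To make the question concrete: when I create the job, I only describe the cluster shape, never any Spark-level resource settings. A minimal sketch of what I mean, using the Jobs API 2.1 (the host, token, notebook path, and node type are placeholders, not my real values):

```python
import requests

host = "https://<databricks-instance>"
token = "<personal-access-token>"

job_spec = {
    "name": "example-notebook-job",
    "tasks": [
        {
            "task_key": "run_notebook",
            "notebook_task": {"notebook_path": "/Users/me/my_notebook"},
            # Only the cluster shape is given; no executor memory,
            # cores, or instance counts anywhere.
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 4,
            },
        }
    ],
}

resp = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=job_spec,
)
print(resp.json())
```

So given only node_type_id and num_workers, where do the executor memory and core settings for my notebook come from?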
In a cluster backed by an instance pool, I understand there are idle instances sitting ready that can be handed to a cluster, but I still don't understand how many resources the notebook itself ends up being assigned.
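For the pool case, I mean a job cluster definition like this (the pool id is a placeholder), where the cluster just draws its nodes from the pool:

```python
# Sketch of a job cluster drawing its workers from an instance pool.
# The node type is implied by the pool, but the resource question
# for the notebook stays the same.
new_cluster_from_pool = {
    "spark_version": "13.3.x-scala2.12",
    "instance_pool_id": "<pool-id>",
    "num_workers": 4,
}
```

Any pointers to how Databricks derives the per-notebook resource allocation in these cases would be much appreciated.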