Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How does a Job Cluster know how many resources to assign to an application?

DBEnthusiast
New Contributor III

Hi All Enthusiasts!

As per my understanding, when a user submits an application to a Spark cluster, the application specifies how much memory, how many executors, etc. it needs.

But in Databricks notebooks we never specify that anywhere. If we submit a notebook to a job cluster, how does the Databricks resource manager decide how many resources to allocate to it?

In a cluster backed by a pool, I understand there are idle instances that can be allocated to the cluster, but I still don't understand how many resources a given notebook will be assigned.
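For example, on a self-managed Spark cluster I would normally declare these resources myself in the application, something like the sketch below (the values are just illustrative placeholders):

```python
from pyspark.sql import SparkSession

# On a plain Spark cluster, the application itself typically declares its
# resource needs. These values are placeholders, not recommendations.
spark = (
    SparkSession.builder
    .appName("example-app")
    .config("spark.executor.instances", "4")  # number of executors
    .config("spark.executor.memory", "8g")    # memory per executor
    .config("spark.executor.cores", "2")      # cores per executor
    .getOrCreate()
)
```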

2 REPLIES

btafur
Databricks Employee

When you create a Job, you specify a cluster configuration with the amount of memory, CPU, nodes, etc., for the cluster: https://docs.databricks.com/en/workflows/jobs/create-run-jobs.html

The notebook will run on the cluster following those configurations.
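As a minimal sketch of what that looks like programmatically, here is a job created through the Jobs API 2.1 with an explicit cluster spec; the host, token, notebook path, node type, and Spark version below are placeholders I chose for illustration, not values from this thread:

```python
import requests

# Placeholders - replace with your workspace URL, a personal access token,
# and a notebook path that exists in your workspace.
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

# The job cluster's resources are fixed here: the node type and worker count
# determine the memory and CPU available to the notebook's Spark application.
job_spec = {
    "name": "example-notebook-job",
    "tasks": [
        {
            "task_key": "run_notebook",
            "notebook_task": {"notebook_path": "/Workspace/Users/me@example.com/my_notebook"},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "i3.xlarge",  # per-node CPU and memory
                "num_workers": 2,             # number of worker nodes
            },
        }
    ],
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])
```

In other words, the notebook gets the whole job cluster, so the resources available to it follow from the node type and worker count you choose here rather than from per-application Spark settings.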

BilalAslamDbrx
Databricks Employee

@DBEnthusiast great question! Today, with Job Clusters, you have to specify this. As @btafur noted, you do this by setting CPU, memory, etc. We are in early preview of Serverless Job Clusters, where you no longer specify this configuration; instead, Databricks figures it out from the workload's requirements.
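To illustrate the difference (this is my assumption about how the preview maps onto the Jobs API, not a confirmed detail from this thread), the serverless version of the same job simply omits the cluster spec:

```python
# Hypothetical sketch: with a serverless job, the task carries no "new_cluster"
# (or "existing_cluster_id") block, so Databricks sizes the compute from the
# workload instead of from a user-supplied cluster configuration.
serverless_job_spec = {
    "name": "example-serverless-notebook-job",
    "tasks": [
        {
            "task_key": "run_notebook",
            "notebook_task": {"notebook_path": "/Workspace/Users/me@example.com/my_notebook"},
            # no cluster configuration here
        }
    ],
}
```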
