Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Specifying cluster on running a job

Tjadi
New Contributor III

Hi,

Let's say that I am starting jobs with different parameters at a certain time each day in the following manner:

import requests

response = requests.post(
    "https://%s/api/2.0/jobs/run-now" % DOMAIN,
    headers={"Authorization": "Bearer %s" % TOKEN},
    json={
        "job_id": job_id,
        "notebook_params": {
            "country_name": str(country_id),
        },
    },
)

I was wondering how I could specify a particular cluster size for a run of a workflow, and how to specify that the cluster should be shared among the tasks in the workflow. This could be interesting when you have one country_id that needs a bigger cluster than all the other countries, and in similar use cases.

Thanks in advance.

2 REPLIES

karthik_p
Esteemed Contributor

@Tjadi Peeters You can select the Autoscaling/Enhanced Autoscaling option in workflows, which will scale based on workload.
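For reference, autoscaling is configured on the cluster spec itself. A minimal sketch of the relevant fragment, with field names as in the Databricks Clusters API and the Spark version, node type, and worker bounds as illustrative placeholders:

```python
# Sketch: an autoscaling cluster spec fragment for a job's new_cluster.
# Field names follow the Databricks Clusters API; the values here are
# illustrative placeholders, not recommendations.
new_cluster = {
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "m5d.2xlarge",
    "autoscale": {
        "min_workers": 2,   # floor: the cluster never shrinks below this
        "max_workers": 10,  # ceiling: scales up to this under load
    },
}
```

Note that this adjusts the worker count only; the node type stays fixed for the life of the cluster.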

Tjadi
New Contributor III

Thanks for your reply. The autoscaling functionality I am aware of only scales the number of workers, or is there another kind? I am looking to start jobs with different worker types (i.e. one job starts with m5d.2xlarge while another has m5d.4xlarge).
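In case it helps later readers: one approach that does allow a different worker type per run is the one-time runs/submit endpoint of the Jobs API 2.1, where the caller supplies the cluster spec with each submission, and tasks that reference the same job_cluster_key run on one shared cluster. A sketch, with DOMAIN and TOKEN as in the original snippet and the notebook paths, Spark version, node type, and sizes as illustrative placeholders:

```python
def build_submit_payload(country_name, node_type_id, num_workers):
    """Build a Jobs API 2.1 runs/submit payload: two tasks sharing one
    new cluster whose node type and size are chosen per run."""
    return {
        "run_name": "per-country-run-%s" % country_name,
        "job_clusters": [
            {
                "job_cluster_key": "shared_cluster",
                "new_cluster": {
                    "spark_version": "13.3.x-scala2.12",  # illustrative
                    "node_type_id": node_type_id,
                    "num_workers": num_workers,
                },
            }
        ],
        "tasks": [
            {
                "task_key": "ingest",
                "job_cluster_key": "shared_cluster",  # runs on the shared cluster
                "notebook_task": {
                    "notebook_path": "/Jobs/ingest",  # hypothetical path
                    "base_parameters": {"country_name": country_name},
                },
            },
            {
                "task_key": "transform",
                "depends_on": [{"task_key": "ingest"}],
                "job_cluster_key": "shared_cluster",  # same cluster as above
                "notebook_task": {
                    "notebook_path": "/Jobs/transform",  # hypothetical path
                    "base_parameters": {"country_name": country_name},
                },
            },
        ],
    }


# A heavy country gets bigger nodes; other countries can use the default size.
payload = build_submit_payload("NL", "m5d.4xlarge", num_workers=8)
# import requests
# response = requests.post(
#     "https://%s/api/2.1/jobs/runs/submit" % DOMAIN,
#     headers={"Authorization": "Bearer %s" % TOKEN},
#     json=payload,
# )  # left commented so the sketch runs without a live workspace
```

Since the cluster spec travels with each submission, the node type can differ run to run without editing the saved job definition.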
