
How to deploy an asset bundle job that triggers another one

dc-rnc
New Contributor II

Hello everyone.

Using DAB, is there a dynamic value reference or something equivalent to get a job_id that I can use inside the YAML definition of another Databricks job? I'd like to trigger that job from another one, but since I'm using a CI/CD pipeline to define/update the Databricks jobs in my workspace, the job_id isn't known until the job is deployed.

Of course, I could use the Databricks API and/or the Databricks CLI to get the job done (i.e., put a placeholder in the YAML file, look up the correct job_id via the API or CLI, and replace it before deploying), but I was wondering if there is already something more convenient.
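In other words, something like this in the job YAML, where the pipeline looks up the real ID (e.g., via databricks jobs list or the Jobs API) and substitutes it before deploying; the placeholder name here is just illustrative:

          run_job_task:
            # __UPSTREAM_JOB_ID__ is a made-up placeholder; the CI/CD pipeline
            # replaces it with the real job ID before the bundle is deployed
            job_id: __UPSTREAM_JOB_ID__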

Thank you. Cheers.

1 ACCEPTED SOLUTION

NandiniN
Databricks Employee
resources:
  jobs:
    my-first-job:
      name: my-first-job
      tasks:
        - task_key: my-first-job-task
          new_cluster:
            spark_version: "13.3.x-scala2.12"
            node_type_id: "i3.xlarge"
            num_workers: 2
          notebook_task:
            notebook_path: ./src/test.py
    my-second-job:
      name: my-second-job
      tasks:
        - task_key: my-second-job-task
          run_job_task:
            # Resolved to my-first-job's actual job ID at deploy time
            job_id: ${resources.jobs.my-first-job.id}

In this example, the job_id of my-first-job is dynamically referenced in the run_job_task of my-second-job using ${resources.jobs.my-first-job.id}.
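When you run databricks bundle deploy, the bundle resolves ${resources.jobs.my-first-job.id} to the actual job ID assigned in the target workspace, so no manual lookup or substitution step is needed in the pipeline.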

 

You could also gate the downstream run with an if/else condition task: https://docs.databricks.com/en/dev-tools/bundles/job-task-types.html#ifelse-condition-task
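For instance, this variant of my-second-job only triggers the downstream run when a condition holds (the bundle variable var.trigger_downstream and the check-trigger task key are illustrative, not part of the original example):

    my-second-job:
      name: my-second-job
      tasks:
        - task_key: check-trigger
          condition_task:
            # Illustrative check: compare a bundle variable against "true"
            op: EQUAL_TO
            left: "${var.trigger_downstream}"
            right: "true"
        - task_key: my-second-job-task
          depends_on:
            # Only runs when the condition task evaluates to "true"
            - task_key: check-trigger
              outcome: "true"
          run_job_task:
            job_id: ${resources.jobs.my-first-job.id}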


