<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: How to deploy an asset bundle job that triggers another one in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/how-to-deploy-an-asset-bundle-job-that-triggers-another-one/m-p/108153#M42984</link>
    <description>&lt;LI-CODE lang="markup"&gt;resources:
  jobs:
    my-first-job:
      name: my-first-job
      tasks:
        - task_key: my-first-job-task
          new_cluster:
            spark_version: "13.3.x-scala2.12"
            node_type_id: "i3.xlarge"
            num_workers: 2
          notebook_task:
            notebook_path: ./src/test.py
    my-second-job:
      name: my-second-job
      tasks:
        - task_key: my-second-job-task
          run_job_task:
            job_id: ${resources.jobs.my-first-job.id}&lt;/LI-CODE&gt;
&lt;P&gt;In this example, the &lt;CODE&gt;job_id&lt;/CODE&gt; of &lt;CODE&gt;my-first-job&lt;/CODE&gt; is dynamically referenced in the &lt;CODE&gt;run_job_task&lt;/CODE&gt; of &lt;CODE&gt;my-second-job&lt;/CODE&gt; using &lt;CODE&gt;${resources.jobs.my-first-job.id}&lt;/CODE&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You could also add conditional branching between tasks with the if/else condition task:&amp;nbsp;&lt;A href="https://docs.databricks.com/en/dev-tools/bundles/job-task-types.html#ifelse-condition-task" target="_blank"&gt;https://docs.databricks.com/en/dev-tools/bundles/job-task-types.html#ifelse-condition-task&lt;/A&gt;&lt;/P&gt;</description>
    <pubDate>Fri, 31 Jan 2025 17:31:31 GMT</pubDate>
    <dc:creator>NandiniN</dc:creator>
    <dc:date>2025-01-31T17:31:31Z</dc:date>
    <item>
      <title>How to deploy an asset bundle job that triggers another one</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-deploy-an-asset-bundle-job-that-triggers-another-one/m-p/107843#M42931</link>
      <description>&lt;P&gt;Hello everyone.&lt;/P&gt;&lt;P&gt;Using DAB, is there a dynamic value reference or something equivalent to get a job_id to be used inside the YAML definition of another Databricks job? I'd like to trigger that job from another one, but if I'm using a CI/CD pipeline to define/update the Databricks jobs in my workspace, that job_id is unknown at runtime.&lt;/P&gt;&lt;P&gt;For sure I can use some Databricks API and/or the Databricks CLI to get the job done (so, using a placeholder in the YAML file, identifying the correct job_id using API or CLI and then replacing it before deploying), but I was wondering if there is something more handy already.&lt;/P&gt;&lt;P&gt;Thank you. Cheers.&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2025 15:55:33 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-deploy-an-asset-bundle-job-that-triggers-another-one/m-p/107843#M42931</guid>
      <dc:creator>dc-rnc</dc:creator>
      <dc:date>2025-01-30T15:55:33Z</dc:date>
    </item>
    <item>
      <title>Re: How to deploy an asset bundle job that triggers another one</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-deploy-an-asset-bundle-job-that-triggers-another-one/m-p/108153#M42984</link>
      <description>&lt;LI-CODE lang="markup"&gt;resources:
  jobs:
    my-first-job:
      name: my-first-job
      tasks:
        - task_key: my-first-job-task
          new_cluster:
            spark_version: "13.3.x-scala2.12"
            node_type_id: "i3.xlarge"
            num_workers: 2
          notebook_task:
            notebook_path: ./src/test.py
    my-second-job:
      name: my-second-job
      tasks:
        - task_key: my-second-job-task
          run_job_task:
            job_id: ${resources.jobs.my-first-job.id}&lt;/LI-CODE&gt;
&lt;P&gt;In this example, the &lt;CODE&gt;job_id&lt;/CODE&gt; of &lt;CODE&gt;my-first-job&lt;/CODE&gt; is dynamically referenced in the &lt;CODE&gt;run_job_task&lt;/CODE&gt; of &lt;CODE&gt;my-second-job&lt;/CODE&gt; using &lt;CODE&gt;${resources.jobs.my-first-job.id}&lt;/CODE&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
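&lt;P&gt;As a sketch of how this plays out end to end (the job keys are the ones from the example above, and the &lt;CODE&gt;dev&lt;/CODE&gt; target name is an assumption, not part of the original bundle): the reference is resolved at deploy time, so the second job can then be triggered through the Databricks CLI.&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;# Check the bundle configuration before deploying
databricks bundle validate

# Deploy; ${resources.jobs.my-first-job.id} is substituted
# with the real job ID created in the workspace
databricks bundle deploy -t dev

# Trigger my-second-job, whose run_job_task runs my-first-job
databricks bundle run my-second-job -t dev&lt;/LI-CODE&gt;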
&lt;P&gt;You could also add conditional branching between tasks with the if/else condition task:&amp;nbsp;&lt;A href="https://docs.databricks.com/en/dev-tools/bundles/job-task-types.html#ifelse-condition-task" target="_blank"&gt;https://docs.databricks.com/en/dev-tools/bundles/job-task-types.html#ifelse-condition-task&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 31 Jan 2025 17:31:31 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-deploy-an-asset-bundle-job-that-triggers-another-one/m-p/108153#M42984</guid>
      <dc:creator>NandiniN</dc:creator>
      <dc:date>2025-01-31T17:31:31Z</dc:date>
    </item>
  </channel>
</rss>

