<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic How do I efficiently manage common and environment-specific job parameters in DABs in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/how-do-i-efficiently-manage-common-and-environment-specific-job/m-p/127183#M47880</link>
    <description>&lt;P class=""&gt;&lt;FONT color="#000000"&gt;I have a scenario where my Databricks asset bundles require two types of job parameters:&lt;/FONT&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;FONT color="#000000"&gt;Common parameters&amp;nbsp;that apply to all environments&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT color="#000000"&gt;Environment-specific parameters&amp;nbsp;that differ per environment.&lt;/FONT&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P class=""&gt;&lt;FONT color="#000000"&gt;My current YAML setup is structured like this:&lt;BR /&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# common_parameters.yaml
common_parameters: &amp;amp;common_parameters
  - name: job_var
    default: "common job var value"
  - name: retry_count
    default: 3 # example common param
  - name: timeout_seconds
    default: 600 # example common param

# env_params.yaml
dev_parameters: &amp;amp;dev_parameters
  - name: redshift_host
    default: ${var.redshift_host}
  - name: test_param_dev # new param for testing in dev
    default: "test value dev"

prod_parameters: &amp;amp;prod_parameters
  - name: redshift_host
    default: ${var.redshift_host}
  - name: test_param_prod # new param for testing in prod
    default: "test value prod"

# notifications.yaml
dev_notifications: &amp;amp;dev_notifications
  email_notifications:
    on_failure:
      - "dev-team@example.com"

prod_notifications: &amp;amp;prod_notifications
  email_notifications:
    on_failure:
      - "prod-alerts@example.com"

# schedules.yaml
prod_schedule: &amp;amp;prod_schedule
  schedule:
    quartz_cron_expression: "0 0 0 * * ?"
    timezone_id: "UTC"
    pause_status: "PAUSED"

# main job definition and targets
resources:
  jobs:
    dummy_job:
      name: "dummy_workflow"
      edit_mode: "EDITABLE"
      tasks:
        - task_key: ingestion
          notebook_task:
            notebook_path: ../src/ingestion/test.py
            source: "WORKSPACE"

targets:
  dev:
    resources:
      jobs:
        dummy_job:
          parameters: *dev_parameters

  prod:
    resources:
      jobs:
        dummy_job:
          parameters: *prod_parameters&lt;/LI-CODE&gt;&lt;P class=""&gt;How do I include or merge the common parameters with the environment-specific parameters within the parameters list and&amp;nbsp;is there any best practices to manage this kind of parameter reuse in Databricks asset bundle YAMLs?&lt;/P&gt;&lt;P class=""&gt;&lt;FONT color="#000000"&gt;&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;</description>
    <pubDate>Fri, 01 Aug 2025 13:09:40 GMT</pubDate>
    <dc:creator>azam-io</dc:creator>
    <dc:date>2025-08-01T13:09:40Z</dc:date>
    <item>
      <title>How do I efficiently manage common and environment-specific job parameters in DABs</title>
      <link>https://community.databricks.com/t5/data-engineering/how-do-i-efficiently-manage-common-and-environment-specific-job/m-p/127183#M47880</link>
      <description>&lt;P class=""&gt;&lt;FONT color="#000000"&gt;I have a scenario where my Databricks asset bundles require two types of job parameters:&lt;/FONT&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;FONT color="#000000"&gt;Common parameters&amp;nbsp;that apply to all environments&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT color="#000000"&gt;Environment-specific parameters&amp;nbsp;that differ per environment.&lt;/FONT&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P class=""&gt;&lt;FONT color="#000000"&gt;My current YAML setup is structured like this:&lt;BR /&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# common_parameters.yaml
common_parameters: &amp;amp;common_parameters
  - name: job_var
    default: "common job var value"
  - name: retry_count
    default: 3 # example common param
  - name: timeout_seconds
    default: 600 # example common param

# env_params.yaml
dev_parameters: &amp;amp;dev_parameters
  - name: redshift_host
    default: ${var.redshift_host}
  - name: test_param_dev # new param for testing in dev
    default: "test value dev"

prod_parameters: &amp;amp;prod_parameters
  - name: redshift_host
    default: ${var.redshift_host}
  - name: test_param_prod # new param for testing in prod
    default: "test value prod"

# notifications.yaml
dev_notifications: &amp;amp;dev_notifications
  email_notifications:
    on_failure:
      - "dev-team@example.com"

prod_notifications: &amp;amp;prod_notifications
  email_notifications:
    on_failure:
      - "prod-alerts@example.com"

# schedules.yaml
prod_schedule: &amp;amp;prod_schedule
  schedule:
    quartz_cron_expression: "0 0 0 * * ?"
    timezone_id: "UTC"
    pause_status: "PAUSED"

# main job definition and targets
resources:
  jobs:
    dummy_job:
      name: "dummy_workflow"
      edit_mode: "EDITABLE"
      tasks:
        - task_key: ingestion
          notebook_task:
            notebook_path: ../src/ingestion/test.py
            source: "WORKSPACE"

targets:
  dev:
    resources:
      jobs:
        dummy_job:
          parameters: *dev_parameters

  prod:
    resources:
      jobs:
        dummy_job:
          parameters: *prod_parameters&lt;/LI-CODE&gt;&lt;P&gt;How do I include or merge the common parameters with the environment-specific parameters within the parameters list, and are there any best practices for managing this kind of parameter reuse in Databricks Asset Bundle YAMLs?&lt;/P&gt;</description>
      <pubDate>Fri, 01 Aug 2025 13:09:40 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-do-i-efficiently-manage-common-and-environment-specific-job/m-p/127183#M47880</guid>
      <dc:creator>azam-io</dc:creator>
      <dc:date>2025-08-01T13:09:40Z</dc:date>
    </item>
    <item>
      <title>Re: How do I efficiently manage common and environment-specific job parameters in DABs</title>
      <link>https://community.databricks.com/t5/data-engineering/how-do-i-efficiently-manage-common-and-environment-specific-job/m-p/132761#M49616</link>
      <description>&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;To merge common parameters with environment-specific parameters in Databricks Asset Bundle (DAB) YAMLs, the most effective approach is to adopt modular YAML files, leveraging the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;include&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;directive and hierarchical overrides within the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;targets&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;mapping. This supports both efficient parameter reuse and scalable environment management.&lt;/P&gt;
&lt;H2 class="mb-2 mt-4 font-display font-semimedium text-base first:mt-0"&gt;Merging Parameters: Structure and Inclusion&lt;/H2&gt;
&lt;UL class="marker:text-quiet list-disc"&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;Define common parameters (or variables) in a separate YAML file, such as&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;variables/common.yml&lt;/CODE&gt;, which contains settings shared across all environments.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;Environment-specific variables should each reside in dedicated files, like&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;variables/&amp;lt;workflow&amp;gt;.dev.yml&lt;/CODE&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;variables/&amp;lt;workflow&amp;gt;.test.yml&lt;/CODE&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;variables/&amp;lt;workflow&amp;gt;.prod.yml&lt;/CODE&gt;, etc.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;In your main&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;databricks.yml&lt;/CODE&gt;, use the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;include&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;directive to bring in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;common.yml&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and the specific environment file required for deployment. Update the included environment-specific file either manually or via automation (e.g., with a CI/CD script).&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
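&lt;P&gt;As a sketch of what those files could contain (the variable names, descriptions, and values below are illustrative assumptions, not taken from the original question):&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;# variables/common.yml - bundle variables shared by every environment (illustrative values)
variables:
  retry_count:
    description: "Common retry count"
    default: "3"
  timeout_seconds:
    description: "Common task timeout in seconds"
    default: "600"

# variables/&amp;lt;workflow&amp;gt;.dev.yml - values that only apply to dev (one such file per environment)
variables:
  redshift_host:
    description: "Redshift endpoint for this environment"
    default: "dev-redshift.example.com"&lt;/LI-CODE&gt;
&lt;P&gt;Included files are merged into the bundle configuration, so both files can contribute entries to the same top-level &lt;CODE&gt;variables&lt;/CODE&gt; mapping.&lt;/P&gt;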
&lt;P&gt;Here's some example code:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;include:
  - resources/**/*.yml
  - variables/common.yml
  - variables/&amp;lt;workflow&amp;gt;.dev.yml # Switch to test.yml, prod.yml, etc. as needed&lt;/LI-CODE&gt;
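&lt;P&gt;To show how the pieces fit together, here is a minimal sketch (again with illustrative names and hosts) of a job that consumes the shared variables via &lt;CODE&gt;${var.&amp;lt;name&amp;gt;}&lt;/CODE&gt;, plus an alternative that keeps a single variables file and overrides per-environment values in the &lt;CODE&gt;targets&lt;/CODE&gt; mapping instead of switching includes:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;# databricks.yml - job parameters referencing the shared bundle variables (sketch)
resources:
  jobs:
    dummy_job:
      name: "dummy_workflow"
      parameters:
        - name: retry_count
          default: ${var.retry_count}
        - name: timeout_seconds
          default: ${var.timeout_seconds}
        - name: redshift_host
          default: ${var.redshift_host}

# Alternative: one variables file with defaults, overridden per target (hosts are illustrative)
targets:
  dev:
    variables:
      redshift_host: "dev-redshift.example.com"
  prod:
    variables:
      redshift_host: "prod-redshift.example.com"&lt;/LI-CODE&gt;
&lt;P&gt;With target-level overrides, &lt;CODE&gt;databricks bundle deploy -t dev&lt;/CODE&gt; or &lt;CODE&gt;-t prod&lt;/CODE&gt; picks up the matching values automatically, and the common parameters are defined only once.&lt;/P&gt;</description>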
      <pubDate>Mon, 22 Sep 2025 18:50:37 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-do-i-efficiently-manage-common-and-environment-specific-job/m-p/132761#M49616</guid>
      <dc:creator>mark_ott</dc:creator>
      <dc:date>2025-09-22T18:50:37Z</dc:date>
    </item>
  </channel>
</rss>

