<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Parametrize DLT pipeline in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/parametrize-dlt-pipeline/m-p/115829#M45195</link>
    <description>&lt;P&gt;Hello!&lt;/P&gt;&lt;P&gt;To parametrize a Databricks DLT pipeline with a static configuration file using Asset Bundles, include your JSON/YAML config file in the bundle so it is deployed alongside the pipeline source. In your pipeline code, read the file with ordinary Python file I/O, referencing its deployed path. Then define your DLT tables dynamically by applying @dlt.table inside a loop, passing the relevant configuration values into each table function to drive the ingestion and transformation logic. Bind the loop variables per iteration (e.g. via a factory function or default arguments) so each table function captures its own config rather than the last loop value. Ensure your bundle configuration (databricks.yml) deploys the config file with the pipeline source. This gives you declarative configuration, per-environment overrides, and version control of your pipeline setup.&lt;/P&gt;</description>
    <pubDate>Fri, 18 Apr 2025 09:23:21 GMT</pubDate>
    <dc:creator>Emmitt18Lefebvr</dc:creator>
    <dc:date>2025-04-18T09:23:21Z</dc:date>
    <item>
      <title>Parametrize DLT pipeline</title>
      <link>https://community.databricks.com/t5/data-engineering/parametrize-dlt-pipeline/m-p/115826#M45193</link>
      <description>&lt;P&gt;If I'm using Databricks Asset Bundles, how would I parametrize a DLT pipeline based on a static configuration file?&lt;/P&gt;&lt;P&gt;In pseudo-code, I would have a .py file:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;import dlt

# Something that pulls a pipeline resource (or artifact) and parses from JSON
table_configs = get_config(...)

for name, config in table_configs.items():
    @dlt.table(name=name)
    def my_table():
        # do something
        ...&lt;/LI-CODE&gt;&lt;P&gt;The context is that I have a description of the data to ingest in a declarative file format and I'd like to use Python to pull those descriptions out of an artifact that I've deployed (and perhaps even built) using Databricks Asset Bundles.&lt;/P&gt;</description>
      <pubDate>Fri, 18 Apr 2025 08:11:25 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/parametrize-dlt-pipeline/m-p/115826#M45193</guid>
      <dc:creator>Malthe</dc:creator>
      <dc:date>2025-04-18T08:11:25Z</dc:date>
    </item>
    <item>
      <title>Re: Parametrize DLT pipeline</title>
      <link>https://community.databricks.com/t5/data-engineering/parametrize-dlt-pipeline/m-p/115829#M45195</link>
      <description>&lt;P&gt;Hello!&lt;/P&gt;&lt;P&gt;To parametrize a Databricks DLT pipeline with a static configuration file using Asset Bundles, include your JSON/YAML config file in the bundle so it is deployed alongside the pipeline source. In your pipeline code, read the file with ordinary Python file I/O, referencing its deployed path. Then define your DLT tables dynamically by applying @dlt.table inside a loop, passing the relevant configuration values into each table function to drive the ingestion and transformation logic. Bind the loop variables per iteration (e.g. via a factory function or default arguments) so each table function captures its own config rather than the last loop value. Ensure your bundle configuration (databricks.yml) deploys the config file with the pipeline source. This gives you declarative configuration, per-environment overrides, and version control of your pipeline setup.&lt;/P&gt;</description>
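The config-driven loop the answer describes can be sketched as follows. The `tables.json` contents, table names, and paths are illustrative, and a stand-in `table` decorator replaces `@dlt.table` so the sketch runs outside a DLT pipeline runtime; the factory function shows how to bind each iteration's `name` and `config` so the generated table functions do not all capture the last loop value:

```python
import json

# Illustrative stand-in for a tables.json file deployed with the bundle;
# in real pipeline code this would be json.load(open(<deployed path>)).
CONFIG_JSON = """
{
  "raw_orders":    {"path": "/landing/orders"},
  "raw_customers": {"path": "/landing/customers"}
}
"""
table_configs = json.loads(CONFIG_JSON)

registry = {}

def table(name):
    # Stand-in for dlt.table: records each generated function by name.
    # Inside a DLT pipeline you would use @dlt.table(name=name) instead.
    def deco(fn):
        registry[name] = fn
        return fn
    return deco

def make_table(name, config):
    # A factory binds name/config per iteration, avoiding Python's
    # late-binding closure pitfall inside the for loop below.
    @table(name=name)
    def _table(path=config["path"]):
        # In DLT this would return a DataFrame, e.g. spark.read.load(path).
        return f"reading {path}"
    return _table

for name, config in table_configs.items():
    make_table(name, config)
```

After the loop runs, one table function per config entry has been registered, each bound to its own source path.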
      <pubDate>Fri, 18 Apr 2025 09:23:21 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/parametrize-dlt-pipeline/m-p/115829#M45195</guid>
      <dc:creator>Emmitt18Lefebvr</dc:creator>
      <dc:date>2025-04-18T09:23:21Z</dc:date>
    </item>
  </channel>
</rss>

