<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>DLT Runtime Values in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/dlt-runtime-values/m-p/98589#M39747</link>
    <description>&lt;P&gt;When my pipeline runs, I need to query a table before I create another table, and for that query I need to know the target catalog and target schema. I assumed the notebook would run in the context of the catalog and schema configured at the pipeline level, so I wouldn't need to qualify the table name with catalog and schema; however, that is not the case. I can't find a way to read these pipeline configuration values at run-time. Is there a way to do this?&lt;/P&gt;&lt;P&gt;I want to do something like this at run-time of a DLT pipeline:&lt;/P&gt;&lt;P&gt;catalog = spark.conf.get("target_catalog")&lt;/P&gt;&lt;P&gt;schema = spark.conf.get("target_schema")&lt;/P&gt;&lt;P&gt;table_name = "a"&lt;/P&gt;&lt;P&gt;df = spark.sql(f"select * from {catalog}.{schema}.{table_name}")&lt;/P&gt;&lt;P&gt;How do I get the target_catalog and target_schema values at run-time from the pipeline? I've searched high and low but have come up empty-handed.&lt;/P&gt;&lt;P&gt;Any help is appreciated.&lt;/P&gt;</description>
    <pubDate>Tue, 12 Nov 2024 22:03:51 GMT</pubDate>
    <dc:creator>MarkV</dc:creator>
    <dc:date>2024-11-12T22:03:51Z</dc:date>
    <item>
      <title>DLT Runtime Values</title>
      <link>https://community.databricks.com/t5/data-engineering/dlt-runtime-values/m-p/98589#M39747</link>
      <description>&lt;P&gt;When my pipeline runs, I need to query a table before I create another table, and for that query I need to know the target catalog and target schema. I assumed the notebook would run in the context of the catalog and schema configured at the pipeline level, so I wouldn't need to qualify the table name with catalog and schema; however, that is not the case. I can't find a way to read these pipeline configuration values at run-time. Is there a way to do this?&lt;/P&gt;&lt;P&gt;I want to do something like this at run-time of a DLT pipeline:&lt;/P&gt;&lt;P&gt;catalog = spark.conf.get("target_catalog")&lt;/P&gt;&lt;P&gt;schema = spark.conf.get("target_schema")&lt;/P&gt;&lt;P&gt;table_name = "a"&lt;/P&gt;&lt;P&gt;df = spark.sql(f"select * from {catalog}.{schema}.{table_name}")&lt;/P&gt;&lt;P&gt;How do I get the target_catalog and target_schema values at run-time from the pipeline? I've searched high and low but have come up empty-handed.&lt;/P&gt;&lt;P&gt;Any help is appreciated.&lt;/P&gt;</description>
      <pubDate>Tue, 12 Nov 2024 22:03:51 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/dlt-runtime-values/m-p/98589#M39747</guid>
      <dc:creator>MarkV</dc:creator>
      <dc:date>2024-11-12T22:03:51Z</dc:date>
    </item>
    <item>
      <title>Re: DLT Runtime Values</title>
      <link>https://community.databricks.com/t5/data-engineering/dlt-runtime-values/m-p/98607#M39754</link>
      <description>&lt;P&gt;Can you set up notebook parameters and pass them into the DLT pipeline? &lt;A href="https://docs.databricks.com/en/jobs/job-parameters.html" target="_blank"&gt;https://docs.databricks.com/en/jobs/job-parameters.html&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 13 Nov 2024 06:07:28 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/dlt-runtime-values/m-p/98607#M39754</guid>
      <dc:creator>SparkJun</dc:creator>
      <dc:date>2024-11-13T06:07:28Z</dc:date>
    </item>
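The parameter suggestion above can be sketched as follows. In a DLT pipeline, key-value pairs added under the pipeline's Configuration settings are exposed to the notebook through spark.conf. This is a minimal, hypothetical helper, not a confirmed solution from the thread: the keys target_catalog and target_schema are assumptions and would have to be defined in the pipeline (or job) configuration yourself.

```python
# Hypothetical sketch: reading pipeline configuration values inside a DLT
# notebook. The keys "target_catalog" / "target_schema" are assumptions --
# they must be added manually under the pipeline's Configuration settings.

def qualified_name(conf, table_name, catalog_key="target_catalog",
                   schema_key="target_schema"):
    """Build a three-part table name from configuration values.

    `conf` is anything with a .get(key, default) method -- e.g. spark.conf
    inside a pipeline notebook, or a plain dict in a local test.
    """
    catalog = conf.get(catalog_key, None)
    schema = conf.get(schema_key, None)
    if not catalog or not schema:
        raise ValueError(
            f"Pipeline configuration must define {catalog_key} and {schema_key}"
        )
    return f"{catalog}.{schema}.{table_name}"

# Inside the pipeline notebook this would look like:
#   df = spark.sql(f"select * from {qualified_name(spark.conf, 'a')}")
```

The drawback, as the next reply notes, is that this duplicates values already present in the pipeline's Destination settings.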
    <item>
      <title>Re: DLT Runtime Values</title>
      <link>https://community.databricks.com/t5/data-engineering/dlt-runtime-values/m-p/98638#M39773</link>
      <description>&lt;P&gt;Yes, I can. But given that I already have these values in the pipeline configuration, it seemed repetitive to configure them again as parameters. And a benefit of reading these values from the pipeline configuration (Destination section) rather than from job or pipeline advanced-configuration parameters is that they cannot be changed in the pipeline (or at least not changed easily).&lt;/P&gt;&lt;P&gt;Is there no way to read pipeline configuration values like the destination catalog and destination schema at run-time?&lt;/P&gt;</description>
      <pubDate>Wed, 13 Nov 2024 10:05:23 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/dlt-runtime-values/m-p/98638#M39773</guid>
      <dc:creator>MarkV</dc:creator>
      <dc:date>2024-11-13T10:05:23Z</dc:date>
    </item>
    <item>
      <title>Re: DLT Runtime Values</title>
      <link>https://community.databricks.com/t5/data-engineering/dlt-runtime-values/m-p/133794#M49925</link>
      <description>&lt;P&gt;Any thoughts on this? I want to read the default catalog and default schema at runtime and store them in Python variables, sourced from the pipeline settings. spark.conf.getAll() does not work.&lt;/P&gt;&lt;P&gt;Databricks Assistant suggests the following, but this doesn't work either; the error indicates these configs don't exist:&lt;/P&gt;&lt;P&gt;&lt;EM&gt;To read the default catalog and default schema from the Lakeflow Declarative Pipeline settings into Python variables, use the following Spark configuration keys:&lt;/EM&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;EM&gt;spark.databricks.sql.initial.catalog for the default catalog&lt;/EM&gt;&lt;/LI&gt;&lt;LI&gt;&lt;EM&gt;spark.databricks.sql.initial.schema for the default schema&lt;/EM&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;EM&gt;Here is how you can assign them to Python variables:&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;%python&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;default_catalog = spark.conf.get("spark.databricks.sql.initial.catalog")&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;default_schema = spark.conf.get("spark.databricks.sql.initial.schema")&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;These variables will reflect the catalog and schema set in your pipeline configuration. If you want to provide fallback values, you can use the os.getenv approach, but the Spark config is the authoritative source for pipeline settings.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 04 Oct 2025 15:54:40 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/dlt-runtime-values/m-p/133794#M49925</guid>
      <dc:creator>MarkV</dc:creator>
      <dc:date>2025-10-04T15:54:40Z</dc:date>
    </item>
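One hedged alternative for the last, unanswered question: rather than hunting for a Spark conf key, ask the session itself via the SQL functions current_catalog() and current_schema(), which exist on Databricks. Whether a pipeline's Destination settings are actually reflected in these session defaults is an assumption to verify in your workspace; this is a sketch, not a confirmed answer from the thread.

```python
# Hedged sketch: read the session's default catalog and schema by querying
# the engine directly. current_catalog() and current_schema() are standard
# SQL functions on Databricks; whether they reflect a DLT pipeline's
# Destination settings at run-time is an assumption to verify.

DEFAULTS_QUERY = "SELECT current_catalog() AS catalog, current_schema() AS schema"

def read_defaults(spark):
    """Return (catalog, schema) for the current session.

    `spark` is any object whose .sql(query) returns a DataFrame-like object
    with .collect() -- the real SparkSession in a pipeline, or a stub locally.
    """
    row = spark.sql(DEFAULTS_QUERY).collect()[0]
    return row["catalog"], row["schema"]

# In a pipeline notebook this would look like:
#   catalog, schema = read_defaults(spark)
#   df = spark.sql(f"select * from {catalog}.{schema}.a")
```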
  </channel>
</rss>

