<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Accessing parameter defined in python notebook into sql notebook - Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/119650#M45939</link>
    <description>Databricks Community Data Engineering thread: migrating SQL notebooks away from the deprecated ${param} substitution syntax toward named parameter markers and SQL session variables.</description>
    <pubDate>Mon, 19 May 2025 18:28:42 GMT</pubDate>
    <dc:creator>lingareddy_Alva</dc:creator>
    <dc:date>2025-05-19T18:28:42Z</dc:date>
    <item>
      <title>Accessing parameter defined in python notebook into sql notebook.</title>
      <link>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/119635#M45933</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;I have one Python notebook (&lt;SPAN&gt;../../config/param_notebook&lt;/SPAN&gt;) where all parameters are defined, like:&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;dbutils.widgets.text("catalog", "catalog_de")&lt;BR /&gt;spark.conf.set("catalog.name", dbutils.widgets.get("catalog"))&lt;BR /&gt;&lt;BR /&gt;dbutils.widgets.text("schema", "emp")&lt;BR /&gt;spark.conf.set("schema.name", dbutils.widgets.get("schema"))&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;In another SQL notebook, the above parameters are used as follows:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;%run ../../config/param_notebook&lt;BR /&gt;&lt;BR /&gt;select '${catalog.name}' as catalog,&lt;BR /&gt;'${schema.name}' as schema&lt;BR /&gt;&lt;BR /&gt;use ${catalog.name};&lt;BR /&gt;select * from ${schema.name}.emp_details;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Since ${param} is deprecated in Databricks, what would be the best approach to make minimal changes across all SQL notebooks to accommodate the new syntax? Thanks.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 19 May 2025 15:42:22 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/119635#M45933</guid>
      <dc:creator>sensanjoy</dc:creator>
      <dc:date>2025-05-19T15:42:22Z</dc:date>
    </item>
    <item>
      <title>Re: Accessing parameter defined in python notebook into sql notebook.</title>
      <link>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/119638#M45934</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/34319"&gt;@sensanjoy&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Since ${param} syntax is deprecated in Databricks, here are the best approaches to make minimal changes across your SQL notebooks:&lt;BR /&gt;&lt;STRONG&gt;Option 1: Using Python Variables with f-strings (Recommended for minimal changes)&lt;/STRONG&gt;&lt;BR /&gt;In your param_notebook:&lt;BR /&gt;# Set up widgets as before&lt;BR /&gt;dbutils.widgets.text("catalog", "catalog_de")&lt;BR /&gt;dbutils.widgets.text("schema", "emp")&lt;/P&gt;&lt;P&gt;# Create Python variables that can be used in SQL cells&lt;BR /&gt;catalog_name = dbutils.widgets.get("catalog")&lt;BR /&gt;schema_name = dbutils.widgets.get("schema")&lt;/P&gt;&lt;P&gt;# Optional: Set spark conf as well for backward compatibility&lt;BR /&gt;spark.conf.set("catalog.name", catalog_name)&lt;BR /&gt;spark.conf.set("schema.name", schema_name)&lt;/P&gt;&lt;P&gt;In your SQL notebook:&lt;BR /&gt;%run ../../config/param_notebook&lt;/P&gt;&lt;P&gt;-- Use Python variables in SQL cells&lt;BR /&gt;%sql&lt;BR /&gt;SELECT '{catalog_name}' as catalog, '{schema_name}' as schema;&lt;/P&gt;&lt;P&gt;USE {catalog_name};&lt;/P&gt;&lt;P&gt;SELECT * FROM {schema_name}.emp_details;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Option 2: Using spark.sql() with Python f-strings&lt;/STRONG&gt;&lt;BR /&gt;In your SQL notebook:&lt;/P&gt;&lt;P&gt;%run ../../config/param_notebook&lt;/P&gt;&lt;P&gt;# Execute SQL with f-strings&lt;BR /&gt;spark.sql(f"USE {catalog_name}")&lt;/P&gt;&lt;P&gt;spark.sql(f"""&lt;BR /&gt;SELECT '{catalog_name}' as catalog, '{schema_name}' as schema&lt;BR /&gt;""").display()&lt;/P&gt;&lt;P&gt;spark.sql(f"""&lt;BR /&gt;SELECT * FROM {schema_name}.emp_details&lt;BR /&gt;""").display()&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Option 3: Using Session Variables (Databricks SQL native approach)&lt;/STRONG&gt;&lt;BR /&gt;In your 
param_notebook:&lt;/P&gt;&lt;P&gt;dbutils.widgets.text("catalog", "catalog_de")&lt;BR /&gt;dbutils.widgets.text("schema", "emp")&lt;/P&gt;&lt;P&gt;# Set session variables&lt;BR /&gt;spark.sql(f"SET VAR catalog_name = '{dbutils.widgets.get('catalog')}'")&lt;BR /&gt;spark.sql(f"SET VAR schema_name = '{dbutils.widgets.get('schema')}'")&lt;/P&gt;&lt;P&gt;In your SQL notebook:&lt;BR /&gt;%run ../../config/param_notebook&lt;/P&gt;&lt;P&gt;-- Use session variables&lt;BR /&gt;SELECT ${VAR.catalog_name} as catalog, ${VAR.schema_name} as schema;&lt;/P&gt;&lt;P&gt;USE IDENTIFIER(${VAR.catalog_name});&lt;/P&gt;&lt;P&gt;SELECT * FROM IDENTIFIER(CONCAT(${VAR.schema_name}, '.emp_details'));&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Option 4: Hybrid Approach (Best for your current setup)&lt;/STRONG&gt;&lt;BR /&gt;Keep your param_notebook mostly unchanged:&lt;/P&gt;&lt;P&gt;dbutils.widgets.text("catalog", "catalog_de")&lt;BR /&gt;spark.conf.set("catalog.name", dbutils.widgets.get("catalog"))&lt;/P&gt;&lt;P&gt;dbutils.widgets.text("schema", "emp")&lt;BR /&gt;spark.conf.set("schema.name", dbutils.widgets.get("schema"))&lt;/P&gt;&lt;P&gt;# Add these lines for the new approach&lt;BR /&gt;catalog_name = dbutils.widgets.get("catalog")&lt;BR /&gt;schema_name = dbutils.widgets.get("schema")&lt;/P&gt;&lt;P&gt;Update SQL notebooks minimally:&lt;BR /&gt;%run ../../config/param_notebook&lt;/P&gt;&lt;P&gt;-- Replace ${catalog.name} with {catalog_name}&lt;BR /&gt;SELECT '{catalog_name}' as catalog, '{schema_name}' as schema;&lt;/P&gt;&lt;P&gt;USE {catalog_name};&lt;/P&gt;&lt;P&gt;SELECT * FROM {schema_name}.emp_details;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Recommendation&lt;/STRONG&gt;&lt;BR /&gt;For minimal changes across all notebooks, I recommend Option 4 (Hybrid Approach):&lt;BR /&gt;1. Add just 2 lines to your param_notebook&lt;BR /&gt;2. 
Do a find-and-replace across SQL notebooks:&lt;BR /&gt;- Replace ${catalog.name} with {catalog_name}&lt;BR /&gt;- Replace ${schema.name} with {schema_name}&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 19 May 2025 16:15:35 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/119638#M45934</guid>
      <dc:creator>lingareddy_Alva</dc:creator>
      <dc:date>2025-05-19T16:15:35Z</dc:date>
    </item>
    <item>
      <title>Re: Accessing parameter defined in python notebook into sql notebook.</title>
      <link>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/119643#M45935</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/24053"&gt;@lingareddy_Alva&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is regarding Option 4 (&lt;STRONG&gt;Hybrid Approach&lt;/STRONG&gt;). The lines below&lt;/P&gt;&lt;P&gt;SELECT {catalog_name} as catalog, {schema_name} as schema;&lt;/P&gt;&lt;P&gt;USE {catalog_name};&lt;/P&gt;&lt;P&gt;would throw a syntax error when you try to run them after the initial setup you mentioned, since Python f-string placeholders are not expanded inside SQL cells.&lt;/P&gt;</description>
      <pubDate>Mon, 19 May 2025 16:58:02 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/119643#M45935</guid>
      <dc:creator>sensanjoy</dc:creator>
      <dc:date>2025-05-19T16:58:02Z</dc:date>
    </item>
    <item>
      <title>Re: Accessing parameter defined in python notebook into sql notebook.</title>
      <link>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/119650#M45939</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/34319"&gt;@sensanjoy&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Corrected Option 4: Hybrid Approach with SQL Variables&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;param_notebook:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;dbutils.widgets.text("catalog", "catalog_de")&lt;BR /&gt;spark.conf.set("catalog.name", dbutils.widgets.get("catalog"))&lt;/P&gt;&lt;P&gt;dbutils.widgets.text("schema", "emp")&lt;BR /&gt;spark.conf.set("schema.name", dbutils.widgets.get("schema"))&lt;/P&gt;&lt;P&gt;# Add these lines to set SQL session variables&lt;BR /&gt;spark.sql(f"SET VAR catalog_name = '{dbutils.widgets.get('catalog')}'")&lt;BR /&gt;spark.sql(f"SET VAR schema_name = '{dbutils.widgets.get('schema')}'")&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;SQL notebooks (minimal change):&lt;/STRONG&gt;&lt;BR /&gt;%run ../../config/param_notebook&lt;/P&gt;&lt;P&gt;-- Replace ${catalog.name} with ${VAR.catalog_name}&lt;BR /&gt;SELECT ${VAR.catalog_name} as catalog, ${VAR.schema_name} as schema;&lt;/P&gt;&lt;P&gt;USE IDENTIFIER(${VAR.catalog_name});&lt;/P&gt;&lt;P&gt;SELECT * FROM IDENTIFIER(CONCAT(${VAR.schema_name}, '.emp_details'));&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 19 May 2025 18:28:42 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/119650#M45939</guid>
      <dc:creator>lingareddy_Alva</dc:creator>
      <dc:date>2025-05-19T18:28:42Z</dc:date>
    </item>
    <item>
      <title>Re: Accessing parameter defined in python notebook into sql notebook.</title>
      <link>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/119789#M45973</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/24053"&gt;@lingareddy_Alva&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;First, if we are going to use SQL variables, they need to be declared before they are set (DECLARE VARIABLE catalog_name STRING).&lt;/P&gt;&lt;P&gt;Second, the main intention was to avoid using $ in the SQL code, which is not what you showed above. Have you tried running these in Python and SQL notebooks to demonstrate a successful run?&lt;/P&gt;</description>
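For reference, a minimal sketch of the declare-then-set sequence described above, assuming a runtime that supports SQL session variables (the names follow the thread's examples and are not from Databricks documentation verbatim):

```sql
-- Session variables must be declared before SET VAR, and are then
-- referenced by name, with no $ substitution needed.
DECLARE OR REPLACE VARIABLE catalog_name STRING DEFAULT 'catalog_de';
DECLARE OR REPLACE VARIABLE schema_name  STRING DEFAULT 'emp';

SET VAR catalog_name = 'catalog_de';
SET VAR schema_name  = 'emp';

SELECT catalog_name AS catalog, schema_name AS schema;
USE IDENTIFIER(catalog_name);
SELECT * FROM IDENTIFIER(schema_name || '.emp_details');
```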
      <pubDate>Tue, 20 May 2025 16:33:57 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/119789#M45973</guid>
      <dc:creator>sensanjoy</dc:creator>
      <dc:date>2025-05-20T16:33:57Z</dc:date>
    </item>
    <item>
      <title>Re: Accessing parameter defined in python notebook into sql notebook.</title>
      <link>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/120164#M46079</link>
      <description>&lt;P&gt;Can someone share more insight on how we can handle this?&lt;/P&gt;</description>
      <pubDate>Sun, 25 May 2025 12:09:42 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/120164#M46079</guid>
      <dc:creator>sensanjoy</dc:creator>
      <dc:date>2025-05-25T12:09:42Z</dc:date>
    </item>
    <item>
      <title>Re: Accessing parameter defined in python notebook into sql notebook.</title>
      <link>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/121355#M46433</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/34815"&gt;@Louis_Frolio&lt;/a&gt;&amp;nbsp;Could you please provide your guidance here?&lt;/P&gt;</description>
      <pubDate>Tue, 10 Jun 2025 16:17:59 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/121355#M46433</guid>
      <dc:creator>sensanjoy</dc:creator>
      <dc:date>2025-06-10T16:17:59Z</dc:date>
    </item>
    <item>
      <title>Re: Accessing parameter defined in python notebook into sql notebook.</title>
      <link>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/121471#M46460</link>
      <description>&lt;P&gt;For your consideration:&lt;/P&gt;
&lt;P&gt;Migrating from `${param}` to Named Parameter Markers in Databricks SQL&lt;/P&gt;
&lt;P&gt;Background:&lt;BR /&gt;Databricks is deprecating the `${param}` syntax for parameter substitution in SQL cells and recommends using the new named parameter marker syntax (e.g., `:parameter_name`) for better compatibility, security, and maintainability. This affects workflows where Python notebooks set parameters using `dbutils.widgets` and `spark.conf.set`, and SQL notebooks reference them using `${param}`.&lt;/P&gt;
&lt;P&gt;---&lt;/P&gt;
&lt;P&gt;Minimal Change Migration Strategy&lt;/P&gt;
&lt;P&gt;1. Update Parameter References in SQL Notebooks&lt;/P&gt;
&lt;P&gt;Replace all instances of `${catalog.name}` and `${schema.name}` with the new syntax:&lt;/P&gt;
&lt;P&gt;- Instead of `${catalog.name}`, use `:catalog_name`&lt;BR /&gt;- Instead of `${schema.name}`, use `:schema_name`&lt;/P&gt;
&lt;P&gt;Example Migration:&lt;/P&gt;
&lt;P&gt;Before:&lt;BR /&gt;```sql&lt;BR /&gt;select '${catalog.name}' as catalog, '${schema.name}' as schema&lt;BR /&gt;use ${catalog.name};&lt;BR /&gt;select * from ${schema.name}.emp_details;&lt;BR /&gt;```&lt;/P&gt;
&lt;P&gt;After:&lt;BR /&gt;```sql&lt;BR /&gt;select :catalog_name as catalog, :schema_name as schema;&lt;BR /&gt;use identifier(:catalog_name);&lt;BR /&gt;select * from identifier(:schema_name || '.emp_details');&lt;BR /&gt;```&lt;BR /&gt;- Use `identifier(...)` to safely substitute catalog, schema, or table names; the IDENTIFIER clause takes the whole dotted name as one string expression, so concatenate the table name rather than appending it after the clause.&lt;/P&gt;
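The replacements above lend themselves to a scripted find-and-replace. A hypothetical helper sketch (the function name `migrate_sql_params` and the dot-to-underscore mapping are my own assumptions, not part of Databricks tooling):

```python
import re

def migrate_sql_params(sql: str) -> str:
    """Rewrite deprecated ${param} references to :param markers.

    Dots in the old conf-style names (e.g. catalog.name) are mapped to
    underscores (:catalog_name) to produce valid marker names.
    """
    return re.sub(
        r"\$\{\s*([A-Za-z_][\w.]*)\s*\}",
        lambda m: ":" + m.group(1).replace(".", "_"),
        sql,
    )

print(migrate_sql_params("use ${catalog.name};"))  # use :catalog_name;
```

Quoted literals like '${catalog.name}' and object references still need review afterwards, since markers that name catalogs, schemas, or tables must additionally be wrapped in identifier(...).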
&lt;P&gt;---&lt;/P&gt;
&lt;P&gt;2. Update Parameter Passing from Python Notebook&lt;/P&gt;
&lt;P&gt;Previously, you set parameters using `dbutils.widgets` and `spark.conf.set`. With the new parameter marker syntax, you should ensure that parameters are exposed as notebook widgets, as Databricks will automatically create widgets for any `:parameter_name` used in SQL cells.&lt;/P&gt;
&lt;P&gt;You can still use `dbutils.widgets.text` in your config notebook:&lt;BR /&gt;```python&lt;BR /&gt;dbutils.widgets.text("catalog_name", "catalog_de")&lt;BR /&gt;dbutils.widgets.text("schema_name", "emp")&lt;BR /&gt;```&lt;BR /&gt;- The widgets will appear in the UI when the SQL notebook is run, allowing users to set values interactively.&lt;/P&gt;
&lt;P&gt;You do not need to use `spark.conf.set` for parameter passing in SQL anymore.&lt;/P&gt;
&lt;P&gt;---&lt;/P&gt;
&lt;P&gt;3. No Change Needed for `%run` Imports&lt;/P&gt;
&lt;P&gt;Continue using `%run ../../config/param_notebook` at the top of your SQL notebooks to initialize widgets.&lt;/P&gt;
&lt;P&gt;---&lt;/P&gt;
&lt;P&gt;4. Summary Table: Migration Comparison&lt;/P&gt;
&lt;P&gt;| Old Syntax (Deprecated) | New Syntax (Recommended) |&lt;BR /&gt;|-------------------------------|----------------------------|&lt;BR /&gt;| `${catalog.name}` | `:catalog_name` |&lt;BR /&gt;| `${schema.name}` | `:schema_name` |&lt;BR /&gt;| `${param}` in SQL | `:param` in SQL |&lt;BR /&gt;| `spark.conf.set(...)` for SQL | Use widgets only |&lt;/P&gt;
&lt;P&gt;---&lt;/P&gt;
&lt;P&gt;5. Additional Notes&lt;/P&gt;
&lt;P&gt;- Identifier Wrapping: Use `identifier(:param)` when substituting schema, table, or column names to avoid SQL injection and syntax errors.&lt;BR /&gt;- Widget UI: When you use `:catalog_name` in SQL, Databricks automatically provides a widget for user input in the notebook or dashboard UI.&lt;BR /&gt;- Automation: Databricks has announced an assistant action to help automate this migration in the future.&lt;/P&gt;
&lt;P&gt;---&lt;/P&gt;
&lt;P&gt;6. Example: Full Minimal-Change Workflow&lt;/P&gt;
&lt;P&gt;Python config notebook (`../../config/param_notebook`):&lt;BR /&gt;```python&lt;BR /&gt;dbutils.widgets.text("catalog_name", "catalog_de")&lt;BR /&gt;dbutils.widgets.text("schema_name", "emp")&lt;BR /&gt;```&lt;/P&gt;
&lt;P&gt;SQL notebook:&lt;BR /&gt;```sql&lt;BR /&gt;%run ../../config/param_notebook&lt;/P&gt;
&lt;P&gt;select :catalog_name as catalog, :schema_name as schema;&lt;BR /&gt;use identifier(:catalog_name);&lt;BR /&gt;select * from identifier(:schema_name || '.emp_details');&lt;BR /&gt;```&lt;/P&gt;
&lt;P&gt;---&lt;/P&gt;
&lt;P&gt;Summarizing:&lt;BR /&gt;To accommodate the deprecation of `${param}` in Databricks SQL, replace `${param}` with `:param` in your SQL notebooks, use `identifier(:param)` for dynamic object names, and continue using widgets for parameter definition. This approach requires minimal changes and aligns with Databricks' unified parameter handling.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Cheers, Lou.&lt;/P&gt;</description>
      <pubDate>Wed, 11 Jun 2025 14:15:20 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/121471#M46460</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-06-11T14:15:20Z</dc:date>
    </item>
    <item>
      <title>Re: Accessing parameter defined in python notebook into sql notebook.</title>
      <link>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/126798#M47774</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi all,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I have a SQL notebook that contains the following statement:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;CREATE OR REPLACE MATERIALIZED VIEW ${catalog_name}.${schema_name}.emp_table AS&lt;BR /&gt;SELECT ...&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I’ve configured the values for catalog_name and schema_name as pipeline parameters in my DLT pipeline settings. The notebook is passed to the DLT pipeline to run.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;However, since ${param} syntax is now deprecated in Databricks, I’m trying to understand the best and minimal-change approach to update my SQL notebooks for compatibility with current standards—especially within a DLT pipeline context.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Any guidance or best practices would be appreciated!&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thanks&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 29 Jul 2025 12:41:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/accessing-parameter-defined-in-python-notebook-into-sql-notebook/m-p/126798#M47774</guid>
      <dc:creator>Rupal_P</dc:creator>
      <dc:date>2025-07-29T12:41:11Z</dc:date>
    </item>
  </channel>
</rss>

