05-19-2025 08:42 AM
Hi All,
I have one Python notebook (../../config/param_notebook) where all the parameters are defined, like this:
```python
dbutils.widgets.text("catalog", "catalog_de")
spark.conf.set("catalog.name", dbutils.widgets.get("catalog"))
dbutils.widgets.text("schema", "emp")
spark.conf.set("schema.name", dbutils.widgets.get("schema"))
```
In another SQL notebook, the above parameters are used as below:
```sql
%run ../../config/param_notebook

select '${catalog.name}' as catalog,
       '${schema.name}' as schema

use ${catalog.name};
select * from ${schema.name}.emp_details;
```
Since ${param} is deprecated in Databricks, what would be the best approach to accommodate the new syntax with minimal changes across all the SQL notebooks? Thanks.
05-19-2025 09:14 AM - edited 05-19-2025 09:15 AM
Hi @sensanjoy
Since ${param} syntax is deprecated in Databricks, here are the best approaches to make minimal changes across your SQL notebooks:
Option 1: Using Python Variables with f-strings (Recommended for minimal changes)
In your param_notebook:
```python
# Set up widgets as before
dbutils.widgets.text("catalog", "catalog_de")
dbutils.widgets.text("schema", "emp")

# Create Python variables that can be used in SQL cells
catalog_name = dbutils.widgets.get("catalog")
schema_name = dbutils.widgets.get("schema")

# Optional: set spark conf as well for backward compatibility
spark.conf.set("catalog.name", catalog_name)
spark.conf.set("schema.name", schema_name)
```
In your SQL notebook:
```sql
%run ../../config/param_notebook

%sql
-- Use Python variables in SQL cells
SELECT '{catalog_name}' as catalog, '{schema_name}' as schema;
USE {catalog_name};
SELECT * FROM {schema_name}.emp_details;
```
Option 2: Using spark.sql() with Python f-strings
In your SQL notebook:
```python
%run ../../config/param_notebook

# Execute SQL with f-strings
spark.sql(f"USE {catalog_name}")

spark.sql(f"""
SELECT '{catalog_name}' as catalog, '{schema_name}' as schema
""").display()

spark.sql(f"""
SELECT * FROM {schema_name}.emp_details
""").display()
```
Option 3: Using Session Variables (Databricks SQL native approach)
In your param_notebook:
```python
dbutils.widgets.text("catalog", "catalog_de")
dbutils.widgets.text("schema", "emp")

# Set session variables
spark.sql(f"SET VAR catalog_name = '{dbutils.widgets.get('catalog')}'")
spark.sql(f"SET VAR schema_name = '{dbutils.widgets.get('schema')}'")
```
In your SQL notebook:
```sql
%run ../../config/param_notebook

-- Use session variables
SELECT ${VAR.catalog_name} as catalog, ${VAR.schema_name} as schema;
USE IDENTIFIER(${VAR.catalog_name});
SELECT * FROM IDENTIFIER(CONCAT(${VAR.schema_name}, '.emp_details'));
```
Option 4: Hybrid Approach (Best for your current setup)
Keep your param_notebook mostly unchanged:
```python
dbutils.widgets.text("catalog", "catalog_de")
spark.conf.set("catalog.name", dbutils.widgets.get("catalog"))
dbutils.widgets.text("schema", "emp")
spark.conf.set("schema.name", dbutils.widgets.get("schema"))

# Add these lines for the new approach
catalog_name = dbutils.widgets.get("catalog")
schema_name = dbutils.widgets.get("schema")
```
Update SQL notebooks minimally:
```sql
%run ../../config/param_notebook

-- Replace ${catalog.name} with {catalog_name}
SELECT '{catalog_name}' as catalog, '{schema_name}' as schema;
USE {catalog_name};
SELECT * FROM {schema_name}.emp_details;
```
Recommendation
For minimal changes across all notebooks, I recommend Option 4 (Hybrid Approach):
1. Add just 2 lines to your param_notebook
2. Do a find-and-replace across SQL notebooks:
- Replace ${catalog.name} with {catalog_name}
- Replace ${schema.name} with {schema_name}
05-19-2025 09:58 AM
This is regarding Option 4 (Hybrid Approach). The lines below
```sql
SELECT {catalog_name} as catalog, {schema_name} as schema;
USE {catalog_name};
```
would throw a syntax error if you try to run them after the initial setup you described.
05-19-2025 11:28 AM - edited 05-19-2025 11:28 AM
Hi @sensanjoy
Corrected Option 4: Hybrid Approach with SQL Variables
param_notebook:
```python
dbutils.widgets.text("catalog", "catalog_de")
spark.conf.set("catalog.name", dbutils.widgets.get("catalog"))
dbutils.widgets.text("schema", "emp")
spark.conf.set("schema.name", dbutils.widgets.get("schema"))

# Add these lines to set SQL session variables
spark.sql(f"SET VAR catalog_name = '{dbutils.widgets.get('catalog')}'")
spark.sql(f"SET VAR schema_name = '{dbutils.widgets.get('schema')}'")
```
SQL notebooks (minimal change):
```sql
%run ../../config/param_notebook

-- Replace ${catalog.name} with ${VAR.catalog_name}
SELECT ${VAR.catalog_name} as catalog, ${VAR.schema_name} as schema;
USE IDENTIFIER(${VAR.catalog_name});
SELECT * FROM IDENTIFIER(CONCAT(${VAR.schema_name}, '.emp_details'));
```
05-20-2025 09:33 AM
First, if we are going to use SQL variables, they need to be declared before we set them (DECLARE VARIABLE catalog_name STRING).
Second, the main intention was to avoid using $ in the SQL code at all, and that is not what the approach above does. Have you actually run these in Python and SQL notebooks and confirmed a successful run?
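For example, a minimal sketch of the session-variable route with no $ substitution at all (assuming a runtime where SQL session variables are supported, e.g. DBR 14.1+; the variable names and default values here are only illustrative):
```sql
-- Declare once, then set from whatever values you need (widgets, job parameters, etc.)
DECLARE OR REPLACE VARIABLE catalog_name STRING DEFAULT 'catalog_de';
DECLARE OR REPLACE VARIABLE schema_name  STRING DEFAULT 'emp';

SET VAR catalog_name = 'catalog_de';
SET VAR schema_name  = 'emp';

-- Session variables can then be referenced directly, with no $ needed
SELECT catalog_name AS catalog, schema_name AS schema;
USE IDENTIFIER(catalog_name);
SELECT * FROM IDENTIFIER(schema_name || '.emp_details');
```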
05-25-2025 05:09 AM
Does anyone have more insight into how we can handle this?
06-10-2025 09:17 AM
@BigRoux Could you please share your guidance here?
06-11-2025 07:15 AM
For your consideration:
Migrating from `${param}` to Named Parameter Markers in Databricks SQL
Background:
Databricks is deprecating the `${param}` syntax for parameter substitution in SQL cells and recommends using the new named parameter marker syntax (e.g., `:parameter_name`) for better compatibility, security, and maintainability. This affects workflows where Python notebooks set parameters using `dbutils.widgets` and `spark.conf.set`, and SQL notebooks reference them using `${param}`.
---
Minimal Change Migration Strategy
1. Update Parameter References in SQL Notebooks
Replace all instances of `${catalog.name}` and `${schema.name}` with the new syntax:
- Instead of `${catalog.name}`, use `:catalog_name`
- Instead of `${schema.name}`, use `:schema_name`
Example Migration:
Before:
```sql
select '${catalog.name}' as catalog, '${schema.name}' as schema
use ${catalog.name};
select * from ${schema.name}.emp_details;
```
After:
```sql
select :catalog_name as catalog, :schema_name as schema;
use identifier(:catalog_name);
select * from identifier(:schema_name).emp_details;
```
- Use `identifier(:param)` to safely substitute schema or table names.
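If you need a fully qualified object name, `identifier()` also accepts a concatenated expression. A minimal sketch (assuming both parameters are defined as widgets, as in section 2 below):
```sql
select * from identifier(:catalog_name || '.' || :schema_name || '.emp_details');
```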
---
2. Update Parameter Passing from Python Notebook
Previously, you set parameters using `dbutils.widgets` and `spark.conf.set`. With the new parameter marker syntax, you should ensure that parameters are exposed as notebook widgets, as Databricks will automatically create widgets for any `:parameter_name` used in SQL cells.
You can still use `dbutils.widgets.text` in your config notebook:
```python
dbutils.widgets.text("catalog_name", "catalog_de")
dbutils.widgets.text("schema_name", "emp")
```
- The widgets will appear in the UI when the SQL notebook is run, allowing users to set values interactively.
You do not need to use `spark.conf.set` for parameter passing in SQL anymore.
---
3. No Change Needed for `%run` Imports
Continue using `%run ../../config/param_notebook` at the top of your SQL notebooks to initialize widgets.
---
4. Summary Table: Migration Comparison
| Old Syntax (Deprecated) | New Syntax (Recommended) |
|-------------------------------|----------------------------|
| `${catalog.name}` | `:catalog_name` |
| `${schema.name}` | `:schema_name` |
| `${param}` in SQL | `:param` in SQL |
| `spark.conf.set(...)` for SQL | Use widgets only |
---
5. Additional Notes
- Identifier Wrapping: Use `identifier(:param)` when substituting schema, table, or column names to avoid SQL injection and syntax errors.
- Widget UI: When you use `:catalog_name` in SQL, Databricks automatically provides a widget for user input in the notebook or dashboard UI.
- Automation: Databricks has announced an assistant action to help automate this migration in the future.
---
6. Example: Full Minimal-Change Workflow
Python config notebook (`../../config/param_notebook`):
```python
dbutils.widgets.text("catalog_name", "catalog_de")
dbutils.widgets.text("schema_name", "emp")
```
SQL notebook:
```sql
%run ../../config/param_notebook
select :catalog_name as catalog, :schema_name as schema;
use identifier(:catalog_name);
select * from identifier(:schema_name).emp_details;
```
---
Summarizing:
To accommodate the deprecation of `${param}` in Databricks SQL, replace `${param}` with `:param` in your SQL notebooks, use `identifier(:param)` for dynamic object names, and continue using widgets for parameter definition. This approach requires minimal changes and aligns with Databricks' unified parameter handling.
Cheers, Lou.
07-29-2025 05:41 AM
Hi all,
I have a SQL notebook that contains the following statement:
```sql
CREATE OR REPLACE MATERIALIZED VIEW ${catalog_name}.${schema_name}.emp_table AS
SELECT ...
```
I’ve configured the values for catalog_name and schema_name as pipeline parameters in my DLT pipeline settings. The notebook is passed to the DLT pipeline to run.
However, since ${param} syntax is now deprecated in Databricks, I’m trying to understand the best and minimal-change approach to update my SQL notebooks for compatibility with current standards—especially within a DLT pipeline context.
Any guidance or best practices would be appreciated!
Thanks