Environment-Specific Schemas in SQL Files
2 weeks ago
Hi Databricks Community,
We are looking for the recommended best practice for managing environment-specific configurations (catalogs and schemas) within our SQL files when using Databricks Asset Bundles.
Our Setup:
We use Databricks Asset Bundles to package and deploy our projects.
Our bundle includes a Python wrapper that reads and executes a series of .sql files using spark.sql().
These SQL files define our views and transformations.
The Challenge:
Our SQL files often need to join tables across different schemas. For example:
```sql
-- FILE: ./src/sql/create_my_view.sql
CREATE OR REPLACE VIEW dev_catalog.notebooks_dev_schema.final_view AS
SELECT t1.*, t2.*
FROM dev_catalog.notebooks_dev_schema.first_notebook_table t1
JOIN dev_catalog.new_dev_schema.new_table t2
  ON t1.id = t2.id;
```
We have two conflicting goals:
Developer Experience: We want our data engineers to be able to copy-paste and run this SQL directly in a notebook to test and debug it easily. This means the catalog and schema names should be present in the file.
Automated Deployment: When we deploy the bundle to a different environment (e.g., 'stage'), we need these names to change automatically. For example:
dev_catalog -> stage_catalog
notebooks_dev_schema -> notebooks_stage_schema
new_dev_schema -> new_stage_schema
Question:
What is the officially recommended or most effective pattern in Databricks to handle this? How can we balance the need for runnable, testable SQL with the need for automated environment configuration during deployment with Bundles?
We've considered the following approaches:
1. Simple string replacement in our Python wrapper, but that makes the base SQL files non-runnable.
2. Using USE CATALOG and USE SCHEMA, but that doesn't solve for joins across multiple schemas.
Is there a feature within Asset Bundles or a design pattern that elegantly solves this?
Thanks in advance for your insights
2 weeks ago
Hi @DineshOjha
The best approach is parameterized SQL with widget-based defaults in your Python wrapper, wired to DABs target variables.
Why this works on both fronts: Engineers run the notebook interactively and widget defaults kick in (dev values). In automated deployments, base_parameters from DABs override the widgets — no code changes, no separate files, no manual find-and-replace.
The core idea is to write your SQL files as templates using ${variable} placeholders (the same syntax DABs natively support in databricks.yml), and to also ship a companion set of "dev defaults" that lets engineers run the SQL directly.
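To make this concrete, here is a minimal sketch of what that wrapper could look like (the widget names, file path, and placeholder names are assumptions for illustration, not something taken from your bundle):

```python
from string import Template

# Widget defaults double as the "dev defaults" for interactive runs.
# In a deployed bundle, base_parameters on the notebook task (set per target
# in databricks.yml) override these values, so the same code runs unchanged.
dbutils.widgets.text("catalog", "dev_catalog")
dbutils.widgets.text("notebooks_schema", "notebooks_dev_schema")
dbutils.widgets.text("new_schema", "new_dev_schema")

params = {
    "catalog": dbutils.widgets.get("catalog"),
    "notebooks_schema": dbutils.widgets.get("notebooks_schema"),
    "new_schema": dbutils.widgets.get("new_schema"),
}

# The SQL file is written with ${...} placeholders, e.g.
#   CREATE OR REPLACE VIEW ${catalog}.${notebooks_schema}.final_view AS ...
with open("./src/sql/create_my_view.sql") as f:
    sql_text = Template(f.read()).substitute(params)

# Run each statement in the file (naive split on ';' for illustration).
for statement in sql_text.split(";"):
    if statement.strip():
        spark.sql(statement)
```

On the deployment side, each target in databricks.yml would define its own variable values (e.g. a stage catalog and schemas) and pass them into the task's base_parameters, which is what overrides the widget defaults when the job runs.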
Hope this helps, @DineshOjha.
a week ago
Thank you LR, we tried this approach and it worked, but it meant defining the same variables in multiple places, such as databricks.yml, the jobs file, and the main code.
So we decided to put all of these variables in a single config file that the Python wrapper reads, replacing the values in the SQL with the values from the config file.
We also tried using SQL parameters, but that didn't work for our case.
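Roughly, the wrapper now looks like this (the config file name, keys, and structure below are illustrative assumptions, not the exact setup):

```python
import json

# The environment name defaults to "dev" for interactive runs; the deployed
# job can pass a different value (e.g. "stage") as a base parameter.
dbutils.widgets.text("env", "dev")
env = dbutils.widgets.get("env")

# Single config file mapping the literal dev names used in the SQL files to
# the names for each target environment, e.g. ./conf/environments.json:
# {"stage": {"dev_catalog": "stage_catalog",
#            "notebooks_dev_schema": "notebooks_stage_schema",
#            "new_dev_schema": "new_stage_schema"}}
with open("./conf/environments.json") as f:
    replacements = json.load(f).get(env, {})  # empty for dev: SQL runs as-is

with open("./src/sql/create_my_view.sql") as f:
    sql_text = f.read()

for dev_name, target_name in replacements.items():
    sql_text = sql_text.replace(dev_name, target_name)

spark.sql(sql_text)
```

Keeping the dev names literal in the .sql files preserves the copy-paste developer experience, while the config file carries the per-environment mapping in one place.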