Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Environment-Specific Schemas in SQL Files

DineshOjha
New Contributor III

Hi Databricks Community,

We are looking for the recommended best practice for managing environment-specific configurations (catalogs and schemas) within our SQL files when using Databricks Asset Bundles.

Our Setup:

  • We use Databricks Asset Bundles to package and deploy our projects.

  • Our bundle includes a Python wrapper that reads and executes a series of .sql files using spark.sql().

  • These SQL files define our views and transformations.

The Challenge:
Our SQL files often need to join tables across different schemas. For example:

-- FILE: ./src/sql/create_my_view.sql

CREATE OR REPLACE VIEW dev_catalog.notebooks_dev_schema.final_view AS
SELECT t1.*,
       t2.*
FROM dev_catalog.notebooks_dev_schema.first_notebook_table t1
JOIN dev_catalog.new_dev_schema.new_table t2
  ON t1.id = t2.id;

We have two conflicting goals:

  1. Developer Experience: We want our data engineers to be able to copy-paste and run this SQL directly in a notebook to test and debug it easily. This means the catalog and schema names should be present in the file.

  2. Automated Deployment: When we deploy the bundle to a different environment (e.g., 'stage'), we need these names to change automatically. For example:

    • dev_catalog -> stage_catalog

    • notebooks_dev_schema -> notebooks_stage_schema

    • new_dev_schema -> new_stage_schema

Question:

What is the officially recommended or most effective pattern in Databricks to handle this? How can we balance the need for runnable, testable SQL with the need for automated environment configuration during deployment with Bundles?

We've considered the following approaches:

1. Simple string replacement in our Python wrapper, but that makes the base SQL files non-runnable.

2. Using USE CATALOG and USE SCHEMA, but a session-level default can't cover statements that join tables across multiple schemas.

Is there a feature within Asset Bundles or a design pattern that elegantly solves this?

Thanks in advance for your insights.

1 REPLY

lingareddy_Alva
Esteemed Contributor

Hi @DineshOjha 

The best approach is parameterized SQL with widget-based defaults in your Python wrapper, wired to DABs target variables. 

Why this works on both fronts: Engineers run the notebook interactively and widget defaults kick in (dev values). In automated deployments, base_parameters from DABs override the widgets — no code changes, no separate files, no manual find-and-replace.
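As a minimal sketch of the wrapper side (the parameter names and the `DEV_DEFAULTS` dict are illustrative, not an official convention): the wrapper reads widget values when running on Databricks, falls back to shipped dev defaults otherwise, and substitutes them into `${name}` placeholders in the SQL text.

```python
from string import Template

# Hypothetical dev defaults shipped alongside the SQL files;
# the key names here are assumptions for illustration.
DEV_DEFAULTS = {
    "catalog": "dev_catalog",
    "main_schema": "notebooks_dev_schema",
    "other_schema": "new_dev_schema",
}

def get_params(defaults):
    """Read widget values on Databricks; fall back to dev defaults elsewhere."""
    try:
        # dbutils is a notebook/job global; in deployed runs, base_parameters
        # from the DABs job definition populate these widgets.
        return {k: dbutils.widgets.get(k) for k in defaults}  # noqa: F821
    except NameError:
        # No dbutils available (e.g. local run): use the dev defaults.
        return dict(defaults)

def render_sql(sql_text, params):
    """Substitute ${name} placeholders in a SQL template string."""
    return Template(sql_text).substitute(params)

sql = "CREATE OR REPLACE VIEW ${catalog}.${main_schema}.final_view AS SELECT 1"
rendered = render_sql(sql, get_params(DEV_DEFAULTS))
```

The rendered string can then be passed to `spark.sql()` exactly as the existing wrapper does today.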

The core idea is to write your SQL files as templates using ${variable} placeholders (which DABs natively support in databricks.yml), but to also ship a companion "dev defaults" file that lets engineers run the SQL directly.
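On the DABs side, a sketch of what that wiring could look like in databricks.yml (variable, job, and path names here are assumptions, not taken from the original post): per-target variable overrides flow into the job's base_parameters, which in turn populate the widgets the wrapper reads.

```yaml
# databricks.yml (sketch; names are illustrative)
variables:
  catalog:
    default: dev_catalog
  main_schema:
    default: notebooks_dev_schema

targets:
  dev:
    default: true
  stage:
    variables:
      catalog: stage_catalog
      main_schema: notebooks_stage_schema

resources:
  jobs:
    run_sql:
      tasks:
        - task_key: run_sql_files
          notebook_task:
            notebook_path: ./src/run_sql
            base_parameters:
              catalog: ${var.catalog}
              main_schema: ${var.main_schema}
```

Deploying with `databricks bundle deploy -t stage` then swaps in the stage values with no change to the SQL files themselves.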

Hope this helps, @DineshOjha.


LR