Hi @DineshOjha
The best approach is parameterized SQL with widget-based defaults in your Python wrapper, wired to DABs target variables.
Why this works on both fronts: when engineers run the notebook interactively, the widget defaults kick in (dev values). In automated deployments, `base_parameters` from the DABs job definition override those defaults — no code changes, no separate files, no manual find-and-replace.
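A minimal sketch of that wrapper pattern (the names `DEV_DEFAULTS` and `get_param` are my own, and the `try/except` fallback is just so the same code also runs outside a notebook — inside Databricks, `dbutils` is already in scope):

```python
# Hypothetical dev defaults for interactive runs.
DEV_DEFAULTS = {"catalog": "dev_catalog", "schema": "dev_schema"}

def get_param(name: str) -> str:
    """Return the widget value in Databricks, or the dev default elsewhere."""
    try:
        # Registers the widget with a dev default; when the job passes
        # base_parameters, dbutils.widgets.get() returns the override instead.
        dbutils.widgets.text(name, DEV_DEFAULTS[name])
        return dbutils.widgets.get(name)
    except NameError:
        # dbutils only exists inside a Databricks notebook context.
        return DEV_DEFAULTS[name]

catalog = get_param("catalog")
schema = get_param("schema")
```

Interactively you get `dev_catalog`/`dev_schema`; a deployed job that sets `base_parameters: {catalog: prod_catalog}` silently wins, with zero edits to the notebook.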
The core idea is to write your SQL files as templates using `${variable}` placeholders (mirroring the `${var.name}` substitution syntax DABs supports natively in `databricks.yml`), but to also ship a companion "dev defaults" file that lets engineers run the SQL directly.
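Rendering such a template in the Python wrapper can be as simple as `string.Template` from the standard library, which uses the same `${variable}` placeholder style (the defaults dict and the example query below are illustrative, not from your setup):

```python
from string import Template

# Hypothetical dev defaults; in practice this could live in a small
# companion JSON/YAML file checked in next to the SQL.
DEV_DEFAULTS = {"catalog": "dev_catalog", "schema": "dev_schema"}

sql_template = "SELECT * FROM ${catalog}.${schema}.orders"

# substitute() raises KeyError if a placeholder has no value,
# which catches typos in placeholder names early.
rendered = Template(sql_template).substitute(DEV_DEFAULTS)
```

In the deployed job, you would build the dict from the widget values instead of `DEV_DEFAULTS`, so the same `.sql` files serve both interactive dev runs and automated deployments.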
Hope this helps, @DineshOjha.
LR