Hi @malterializedvw,
You don’t need Databricks Query objects for this. Have you considered using bundle variables together with SQL task parameters for the .sql file? Check this documentation.
The nice thing is that sql_task supports a parameters map. Any key you define there can be referenced in your SQL file as {{parameter_key}} and will be substituted by Databricks at run time.
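To make this concrete, here is a minimal sketch of what the job definition could look like in databricks.yml. The job name, task key, and variable names (catalog, schema, table_suffix, warehouse_id) are illustrative assumptions, not something from your setup:

```yaml
# databricks.yml (sketch -- resource and variable names are assumptions)
variables:
  catalog:
    default: dev_catalog
  schema:
    default: analytics
  table_suffix:
    default: ""
  warehouse_id:
    description: SQL warehouse to run the task on

resources:
  jobs:
    orders_job:
      tasks:
        - task_key: refresh_orders_mv
          sql_task:
            warehouse_id: ${var.warehouse_id}
            file:
              path: ./query.sql
            # Each key here becomes a {{parameter_key}} placeholder in query.sql
            parameters:
              catalog: ${var.catalog}
              schema: ${var.schema}
              table_suffix: ${var.table_suffix}
```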
For example, you could do something like this in your query.sql:
```sql
CREATE OR REPLACE MATERIALIZED VIEW
{{catalog}}.{{schema}}.orders_mv{{table_suffix}}
AS
SELECT *
FROM {{catalog}}.{{schema}}.orders{{table_suffix}};
```
That way, you can use the same SQL file in multiple environments without having to use sed. Environment differences can be handled via targets + variable overrides.
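As a sketch of the overrides side, each target can supply its own variable values, so the same query.sql resolves to different catalogs/suffixes per environment (the target and value names below are just examples):

```yaml
# databricks.yml targets section (sketch -- names and values are examples)
targets:
  dev:
    variables:
      catalog: dev_catalog
      table_suffix: _dev
  prod:
    variables:
      catalog: prod_catalog
      table_suffix: ""
```

Then `databricks bundle deploy -t dev` and `-t prod` pick up the right values automatically.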
Does this help?
If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.
Regards,
Ashwin | Delivery Solution Architect @ Databricks
Helping you build and scale the Data Intelligence Platform.
***Opinions are my own***