Hi @carlos_tasayco,
When you use dbutils.notebook.run, it expects the target to be a notebook that can accept parameters and execute code within the Databricks environment. This function does not directly support running .sql files. Instead, you should place your SQL code inside a Databricks notebook (e.g., a .py file) and then use dbutils.notebook.run to execute that notebook.
Create a new Databricks notebook (e.g., run_sql_notebook.py) and place your SQL code inside it. For example:
# Read the "environment" value passed in as a notebook parameter/widget
inputEnvironment = dbutils.widgets.get("environment")

# spark.sql() executes one statement at a time, so run each statement separately
spark.sql("DROP TEMPORARY VARIABLE IF EXISTS strEnv")
spark.sql("DECLARE VARIABLE strEnv STRING")
spark.sql(f"SET VARIABLE strEnv = '{inputEnvironment}'")
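Then call it from your main notebook, passing the environment as a parameter. A minimal sketch, assuming the notebook above is saved as run_sql_notebook in the same folder and "dev" is the value you want to pass:

# Run the child notebook with a 60-second timeout, passing the environment parameter
result = dbutils.notebook.run("run_sql_notebook", 60, {"environment": "dev"})

Keep in mind that dbutils.notebook.run executes the child notebook in its own Spark session, so any SQL that references strEnv should live in that same child notebook.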