05-09-2024 10:57 PM
@SamGreene
Simply write your SQL queries as Python variables and then run them through
spark.sql(qry)
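A minimal sketch of that pattern, assuming a Databricks notebook with an active spark session (the catalog, schema, and table names here are illustrative, not from the original thread):

# Build the fully qualified table name from ordinary Python variables,
# then run the statement through spark.sql() instead of a %sql cell.
catalog = "dev"    # illustrative values; change per environment
schema = "sales"
qry = f"SELECT COUNT(*) AS row_count FROM {catalog}.{schema}.orders"
display(spark.sql(qry))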
05-10-2024 04:31 PM
Thanks for the suggestion, but we are using SQL in these notebooks, and the Databricks documentation says COPY INTO supports the IDENTIFIER clause. I need to find a way to parameterize SQL notebooks so they can run against a different catalog/schema.
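For reference, the pattern being described would look roughly like this in a Python cell (a sketch, assuming a runtime where named parameter markers and the IDENTIFIER clause are available; the table name and source path are hypothetical):

# Parameterize the COPY INTO target with IDENTIFIER and a named
# parameter marker, keeping the SQL text itself constant.
target = "dev.sales.orders"   # hypothetical fully qualified table name
spark.sql(
    """
    COPY INTO IDENTIFIER(:tbl)
    FROM '/Volumes/dev/sales/landing/orders/'
    FILEFORMAT = PARQUET
    """,
    args={"tbl": target},
)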
05-15-2024 11:05 AM
I would use widgets in the notebook, which Jobs can populate at run time. SQL in notebooks can use parameters, and so can the SQL in Jobs, now that parameterized queries are supported. A sketch is below.
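A sketch of the widget approach (the widget names are illustrative; when the notebook runs as a Job task, the Job's parameters can supply these values):

# Widgets render as notebook inputs interactively, and a Job task can
# set them through its parameters at run time.
dbutils.widgets.text("catalog", "dev")
dbutils.widgets.text("schema", "sales")
target = f"{dbutils.widgets.get('catalog')}.{dbutils.widgets.get('schema')}.orders"
df = spark.sql("SELECT * FROM IDENTIFIER(:tbl) LIMIT 10", args={"tbl": target})
display(df)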
06-05-2024 10:33 AM
The solution that worked was adding this Python cell to the notebook:
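(The cell itself is not included in the thread. A plausible reconstruction, assuming widgets named catalog and schema as above, is a cell that points the session at the selected environment so the SQL cells that follow resolve unqualified names against it:)

# Hypothetical reconstruction of the missing cell: read the widgets and
# set the session's default catalog and schema, so the notebook's SQL
# cells (including COPY INTO) run against the selected environment.
catalog = dbutils.widgets.get("catalog")
schema = dbutils.widgets.get("schema")
spark.sql(f"USE CATALOG {catalog}")
spark.sql(f"USE SCHEMA {schema}")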