05-09-2024 09:49 AM
- Labels: Delta Lake
05-09-2024 10:57 PM
@SamGreene
Simply write your SQL queries as Python variables and then run them through spark.sql(qry).
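For example, a minimal sketch of that approach, with a hypothetical table name and the catalog/schema held in ordinary Python variables:

```python
# Assumes a Databricks notebook, where spark and display are predefined.
# Hypothetical target location; in practice these could come from widgets or job parameters.
catalog = "dev"
schema = "bronze"

# Build the SQL text as a Python string, then execute it with spark.sql.
qry = f"SELECT * FROM {catalog}.{schema}.my_table LIMIT 10"
df = spark.sql(qry)
display(df)
```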
05-10-2024 04:31 PM
Thanks for the suggestion, but we are using SQL in these notebooks, and the Databricks documentation says COPY INTO supports using the IDENTIFIER function. I need to find a way to parameterize SQL notebooks to run them against different catalogs/schemas.
05-15-2024 11:05 AM
I would use widgets in the notebook, which can be given values when the notebook runs in Jobs. SQL in notebooks can use parameters, and SQL in jobs can as well now that parameterized queries are supported; see the sketch below.
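A rough sketch of combining the two, assuming hypothetical widgets named catalog and schema and a table my_table; IDENTIFIER plus a named parameter marker keeps the table reference parameterized:

```python
# Assumes a Databricks notebook, where dbutils, spark, and display are predefined.
# Hypothetical widgets; a job task can pass values for them as parameters.
dbutils.widgets.text("catalog", "dev")
dbutils.widgets.text("schema", "bronze")

target = f"{dbutils.widgets.get('catalog')}.{dbutils.widgets.get('schema')}.my_table"

# Named parameter marker resolved through IDENTIFIER, so the SQL text itself stays generic.
df = spark.sql("SELECT * FROM IDENTIFIER(:tbl) LIMIT 10", args={"tbl": target})
display(df)
```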
06-05-2024 10:33 AM
The solution that worked was adding this Python cell to the notebook:
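A minimal sketch of what such a cell could look like (the exact cell may differ), assuming hypothetical widgets named catalog and schema select the target location:

```python
# Assumes a Databricks notebook, where dbutils and spark are predefined.
# Hypothetical widgets supplying the target catalog and schema for this run.
catalog = dbutils.widgets.get("catalog")
schema = dbutils.widgets.get("schema")

# Point every subsequent SQL cell in the notebook at the chosen location.
spark.sql(f"USE CATALOG {catalog}")
spark.sql(f"USE SCHEMA {schema}")
```

With a cell like this at the top, the SQL cells that follow can use unqualified table names and resolve against whichever catalog and schema the job passed in.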

