Hydra configuration and job parameters in DABs (Databricks Asset Bundles)
3 weeks ago
Hello Community,
I'm trying to create a job pipeline in Databricks that runs a spark_python_task, which executes a Python script configured with Hydra. The script's configuration file defines parameters such as id.
How can I pass this parameter at the job level in Databricks so that the task picks it up and Hydra applies it as an override? And how can I use dbutils.secrets.get from this type of spark_python_task to retrieve the keys I need?
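In case it helps frame the question: one common pattern is to declare a job-level parameter in the bundle and forward it to the spark_python_task as a trailing `key=value` argument, which Hydra treats as a command-line override. The sketch below is a minimal, hypothetical `databricks.yml` fragment; the job, parameter, and file names are assumptions, not taken from my actual bundle.

```yaml
# Sketch of a bundle resource (names are hypothetical).
resources:
  jobs:
    my_hydra_job:
      name: my_hydra_job
      # Job-level parameter that can be set or overridden per run.
      parameters:
        - name: run_id
          default: "123"
      tasks:
        - task_key: train
          spark_python_task:
            python_file: ../src/train.py
            parameters:
              # Hydra parses trailing key=value args as config overrides;
              # {{job.parameters.run_id}} is substituted by Databricks at run time.
              - "id={{job.parameters.run_id}}"
```

The idea is that whatever value is supplied for `run_id` when the job is triggered ends up as `id=<value>` on the script's command line, which Hydra then merges over the value in the config file.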
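To make the override mechanics concrete: Hydra reads trailing `key=value` arguments from the command line and merges them over the file-based config. The snippet below is a stdlib-only mock of that parsing step, just to illustrate what the task's `parameters` list turns into; real code would simply decorate `main()` with `@hydra.main` and read `cfg.id`. Regarding secrets: inside a spark_python_task, `dbutils` is not predefined the way it is in a notebook; a common approach is to build it from the active SparkSession (`from pyspark.dbutils import DBUtils; dbutils = DBUtils(spark)`) and then call `dbutils.secrets.get(scope=..., key=...)`, though I'd treat that as an assumption to verify against the docs.

```python
import sys


def parse_overrides(argv):
    """Mock of Hydra's handling of trailing ``key=value`` CLI overrides.

    Each ``key=value`` argument becomes an entry in the override dict;
    a leading ``+`` (Hydra's "add new key" prefix) is stripped. This is
    only an illustration of what the spark_python_task parameters look
    like to the script, not a replacement for Hydra itself.
    """
    overrides = {}
    for arg in argv:
        key, sep, value = arg.partition("=")
        if sep:  # ignore arguments that are not key=value pairs
            overrides[key.lstrip("+")] = value
    return overrides


if __name__ == "__main__":
    # e.g. python train.py id=123 +model.lr=0.1
    print(parse_overrides(sys.argv[1:]))
```

With the YAML above, the script would effectively receive `["id=123"]`, and Hydra would override `cfg.id` accordingly.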

