Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Hydra configuration and job parameters of DABs

jeremy98
Contributor III

Hello Community,

I'm trying to create a job pipeline in Databricks that runs a spark_python_task, which executes a Python script configured with Hydra. The script's Hydra configuration file defines parameters such as id.
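
For context, here is a simplified sketch of what I have in mind (the config path, parameter name, and secret scope/key are placeholders; constructing dbutils from the SparkSession is just one approach I've seen suggested for non-notebook tasks):

```python
# Simplified sketch with placeholder names, assuming a Hydra config at
# conf/config.yaml that defines an `id` field.
import hydra
from omegaconf import DictConfig
from pyspark.sql import SparkSession
from pyspark.dbutils import DBUtils  # available on Databricks clusters


@hydra.main(config_path="conf", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    # Hydra reads command-line overrides (e.g. "id=123") from sys.argv,
    # so anything the task passes as parameters could override cfg.id here.
    print(f"Running with id={cfg.id}")

    # dbutils is not injected into a spark_python_task the way it is in a
    # notebook, so here it is constructed from the active SparkSession.
    spark = SparkSession.builder.getOrCreate()
    dbutils = DBUtils(spark)
    api_key = dbutils.secrets.get(scope="my-scope", key="my-api-key")  # placeholder scope/key


if __name__ == "__main__":
    main()
```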

How can I pass this parameter at the job level in Databricks so that the task picks it up and overrides it via Hydra? And how can I use dbutils.secrets.get from this type of spark_python_task to retrieve the secrets I need?
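
For reference, this is roughly how I imagine the DAB job definition would pass the parameter, assuming a job-level parameter named id that is forwarded to the task as a Hydra-style override (resource and file names are placeholders, and the cluster configuration is omitted for brevity):

```yaml
resources:
  jobs:
    hydra_job:
      name: hydra_job
      parameters:            # job-level parameter, settable at run time
        - name: id
          default: "123"
      tasks:
        - task_key: run_script
          spark_python_task:
            python_file: ../src/main.py
            # spark_python_task parameters end up in the script's sys.argv,
            # so a "key=value" entry should be picked up by Hydra as an override.
            parameters:
              - "id={{job.parameters.id}}"
          # job_cluster_key / existing_cluster_id omitted for brevity
```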

0 REPLIES