Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

use job parameters in scripts

N_M
New Contributor III

Hi Community

I did some research, but I wasn't lucky, and I'm a bit surprised I can't find anything about this.

So, I would simply like to access the job parameters from Python scripts (not notebooks).

My flow doesn't use notebooks, but I still need to drive it with some parameters that I declare before running the job (so, static).

Here are my attempts so far:

  • The trivial use of widgets does not work. Widgets are available only in notebooks, so dbutils.widgets.text/get are out of scope; the calls simply return None.
  • I tried environment variables set at runtime: a simple notebook, set as the root of the flow, pushes the job parameter into an environment variable, i.e., os.environ["PARAM"] = dbutils.widgets.get("PARAM") (sketched below).
    Unfortunately, environment variables are *not* propagated to child tasks (probably because the interpreter is restarted for each task).
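
For reference, here is roughly what that second attempt looked like (a minimal sketch, assuming a job parameter named "PARAM"; the names are illustrative):

    # Root task: a notebook ("environment_setter") that reads the job parameter
    # and exports it as an environment variable.
    import os
    os.environ["PARAM"] = dbutils.widgets.get("PARAM")  # visible in this task only

    # Child task: a separate Python script, started in a fresh interpreter.
    import os
    print(os.environ.get("PARAM"))  # None -- the variable does not survive the task boundary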
Limits to workarounds:

Of course there are 10K workarounds, but some are not applicable to my scope and some are really bad practice. Let me list my limitations here:

  • Setting environment variables in init scripts does NOT solve my problem. I want a parameter that I can change before running the job, on the same cluster and within the same flow.
  • I want to avoid creating as many job clusters as there are parameter values and "picking" the relevant cluster every time I change a parameter. Not a good practice in my view.
  • I have 20+ scripts running as a DAG flow. They are scripts because they are supposed to run outside Databricks as well, and independently, so I want to avoid converting them to notebooks (this also brings some issues with versioning the code AND the VS Code Databricks plugin... another topic).
  • I cannot use task values. Reading them depends on the task that set them, and it doesn't make sense to load a parameter in every script from a hardcoded task (something like dbutils.jobs.taskValues.get(taskKey="environment_setter", key="param", default=42, debugValue=0), where "environment_setter" is my root task...).

Any ideas are really appreciated!

Thanks

2 REPLIES

daniel_sahal
Esteemed Contributor

@N_M 
From what I see in the documentation, spark_python_task takes a "parameters" field as an array of strings, in which you can put your command-line parameters using {{job.parameters.[name]}}.
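
For example (a rough sketch; the parameter name "env", the file path, and the argparse wiring are illustrative, not taken from the docs):

    # child_script.py -- entry point of a spark_python_task whose task definition
    # (assumed, illustrative) contains something like:
    #   "spark_python_task": {
    #       "python_file": "dbfs:/scripts/child_script.py",
    #       "parameters": ["--env", "{{job.parameters.env}}"]
    #   }
    # At run time, {{job.parameters.env}} is substituted with the job parameter value,
    # so the script reads it like any other command-line argument.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--env", required=True)  # value injected by the job
    args = parser.parse_args()
    print(f"Running with env = {args.env}")

Since the scripts already run outside Databricks, the same argparse interface keeps working locally; only the source of the value changes.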

xiangzhu
Contributor II

A workaround that I found is to use the Databricks Jobs API to get the job run info. The job parameters are included there, but you need to prepare credentials in advance.
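
Something along these lines (a rough sketch, assuming credentials are available via DATABRICKS_HOST / DATABRICKS_TOKEN environment variables and that the run id is passed to the script, e.g. as a "{{job.run_id}}" command-line argument; the exact field names in the response may differ, so inspect the JSON):

    # get_job_params.py -- fetch the current run's job parameters via the Jobs API.
    import os
    import sys
    import requests

    def get_job_parameters(run_id: str) -> dict:
        host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
        token = os.environ["DATABRICKS_TOKEN"]  # credential prepared in advance
        resp = requests.get(
            f"{host}/api/2.1/jobs/runs/get",
            headers={"Authorization": f"Bearer {token}"},
            params={"run_id": run_id},
        )
        resp.raise_for_status()
        run = resp.json()
        # Pick the effective value of each job parameter out of the run metadata.
        return {p["name"]: p.get("value", p.get("default")) for p in run.get("job_parameters", [])}

    if __name__ == "__main__":
        print(get_job_parameters(sys.argv[1]))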
