Data Engineering

Is it possible to pass a job's parameters through to a variable?

del1000
New Contributor III

Scenario:

I run notebook_primary as a job with a map of parameters. This notebook orchestrates notebooks_sec_1, notebooks_sec_2, notebooks_sec_3, and so on; I run them with the dbutils.notebook.run(path, timeout, arguments) function.

So how can notebook_primary gather all of the input parameters coming from the job's configuration and pass them on to the notebooks_sec_* notebooks? For example:

arg = some_magic_function_gathering_all_actual_input_params()

#
# some iteration over arg
#

nb1 = dbutils.notebook.run('./notebooks_sec_1', 0, arg)
nb2 = dbutils.notebook.run('./notebooks_sec_2', 0, arg)
nb3 = dbutils.notebook.run('./notebooks_sec_3', 0, arg)
 

Right now I can't iterate over the input parameters; I can only read a value when I already know the parameter's name.

Thank you in advance for any advice.


8 Replies

Dan_Z
Databricks Employee

Very possible. You just use dbutils.widgets.get().

For instance, I set up a job with the following parameter:

{
  "foo": "bar"
}

The primary notebook:

the_arg = dbutils.widgets.get("foo")
 
print(the_arg)
 
nb1 = dbutils.notebook.run('./notebooks_sec_1', 0, {"foo" : the_arg})

notebooks_sec_1:

the_arg = dbutils.widgets.get("foo")
 
print(the_arg)

Then, when I ran it, both notebooks printed "bar".

del1000
New Contributor III

Sorry, but that is not the answer to my question.

In notebook_primary I don't know the names of all the arguments in advance. I'd like to iterate over the arguments, modify some of them, and pass them on to each notebooks_sec_* notebook.

The question is: can a notebook discover the arguments it was run with? Specifically, is it possible to write code like the following?

for arg_key in arguments.keys():
    print(arg_key)

Dan_Z
Databricks Employee
(Accepted solution)

Oh, I see what you are looking for. Yes, totally possible. Here is what your primary notebook code would look like:

# getCurrentBindings() returns all of the notebook's current widget
# bindings (i.e. the input parameters) as a map of name -> value
all_args = dbutils.notebook.entry_point.getCurrentBindings()

print(all_args)

# iterating over the bindings yields the parameter names
for arg in all_args:
    print(arg)

nb1 = dbutils.notebook.run('./notebooks_sec_1', 0, all_args)
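A side note, since the original goal was to modify the arguments before forwarding them: getCurrentBindings() comes from the notebook's internal entry point rather than the documented dbutils API, and it returns a Java-backed mapping via py4j. A minimal sketch, assuming that object can be copied into a plain Python dict (the "run_mode" parameter below is made up for illustration):

# Copy the bindings into a plain Python dict so they can be edited safely.
# Assumes the object returned by getCurrentBindings() is dict-like (py4j map).
all_args = dict(dbutils.notebook.entry_point.getCurrentBindings())

# Illustrative change before fanning out ("run_mode" is a hypothetical name)
all_args["run_mode"] = "secondary"

nb1 = dbutils.notebook.run('./notebooks_sec_1', 0, all_args)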

del1000
New Contributor III

Thank you, Dan. That is exactly what I wanted 🙂 By the way, where are that property and method of dbutils documented? I can't find any references to them.

673602
New Contributor II

We have exactly the same requirement, but we were looking for a similar possibility in Scala. We were not able to find any function close to getCurrentBindings() in Scala.

del1000
New Contributor III

@Balbir Singh, I'm a newbie in Databricks, but the manual says you can use a Python cell and transfer variables to a Scala cell via temp tables.

https://docs.databricks.com/notebooks/notebook-workflows.html#pass-structured-data
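For anyone looking for the shape of that workaround: the linked page describes passing structured data between languages through a temporary view. A minimal sketch, assuming a Python cell runs first and a Scala cell in the same notebook reads the view afterwards (the view name params_view is made up here):

# Python cell: stash the parameters in a temporary view
all_args = dict(dbutils.notebook.entry_point.getCurrentBindings())
spark.createDataFrame(list(all_args.items()), ["key", "value"]) \
    .createOrReplaceTempView("params_view")

# A %scala cell can then read them back, e.g.:
#   val params = spark.table("params_view").collect()
#     .map(r => r.getString(0) -> r.getString(1)).toMap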

nnalla
New Contributor II

I am using getCurrentBindings(), but it returns an empty dictionary even though I passed parameters. I am running it in a scheduled workflow job.
