Databricks Job: Unable to read Databricks job run parameters in Scala code and SQL query.

kumarV
New Contributor

We created a Databricks job with a JAR (Scala code), provided parameters / JAR parameters, and were able to read those as arguments in the main method.

However, when we run the job with run parameters / job parameters, those parameters cannot be read in the Scala code or the SQL query.

We also tried creating a widget and reading the parameters from it, but that is not working either.

Could you please suggest a way to read the run parameters / job parameters?

1 REPLY

Louis_Frolio
Databricks Employee

Hey @kumarV, I did some digging and here are some hints/tips to help you troubleshoot further.

Yep — this really comes down to how parameters flow through Lakeflow Jobs depending on the task type. JAR tasks are the odd duck: they don’t get the same “key/value auto-pushdown” behavior that SQL tasks and notebooks do. Here’s the clean mental model and the wiring pattern that will make both your Scala (JAR) and SQL behave predictably.

What’s going on (and why your current approach is failing)

  1. JAR tasks only get positional args

    A JAR task’s entrypoint is main(String[] args). Databricks will only pass what you explicitly put into the JAR task’s Parameters list, and it passes those values as a JSON array -> args[].

    Job parameters are not magically injected into the JAR task unless you reference them there.

Also: widgets are notebook-only. dbutils.widgets is not a JAR task thing, so any “widget style” approach will dead-end.

  2. Job parameters auto-push only into task types that accept key/value params

    Notebook tasks, SQL tasks, Python wheel tasks with keyword args, Run Job tasks, etc. can consume named parameters more naturally.

    JAR tasks don't, because their parameters are just a positional array.

  3. SQL parameter syntax depends on where the SQL runs

    SQL task (query/file): use {{parameter_key}}.

    SQL inside a notebook cell: use named params like :year_param, and fetch/set via widgets/job parameters.

    Spark SQL in Scala (inside the JAR): no notebook-style named parameters; read from args and use DataFrame APIs (preferred) or careful string interpolation.

 

How to pass job/run parameters into a JAR task (Scala)

Step 1: Define job parameters (job-level)

Example: input_table, process_date.

This gives you defaults and enables “Run now with different parameters”.

 

Step 2: In the JAR task, explicitly thread them into Parameters using dynamic references

In the UI (JAR task → Parameters), do something like:

--input_table
{{job.parameters.input_table}}
--process_date
{{job.parameters.process_date}}

That forces the resolved values into args[] for that run.
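
To verify the wiring before writing any parsing logic, a quick probe helps. This is only a sketch: the object name and the example values in the comments are hypothetical, but it shows the shape in which the resolved Parameters arrive in main.

object ArgsProbe {
  def main(args: Array[String]): Unit = {
    // Log exactly what the JAR task received, before any parsing.
    args.zipWithIndex.foreach { case (arg, i) => println(s"args($i) = $arg") }
    // With the Parameters wiring above, a run would print something like:
    //   args(0) = --input_table
    //   args(1) = main.default.my_table   (resolved job parameter, example value)
    //   args(2) = --process_date
    //   args(3) = 2024-01-01              (example value)
  }
}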

 

Step 3: Parse args in Scala and use them safely

Your sliding(2,2) pattern is totally fine for a minimal parser:

object Main {
  def main(args: Array[String]): Unit = {
    // Pair up "--key value" arguments and index them by key.
    val params = args.sliding(2, 2).toList.collect { case Array(k, v) => k -> v }.toMap

    val inputTable  = params.getOrElse("--input_table", sys.error("Missing --input_table"))
    val processDate = params.getOrElse("--process_date", sys.error("Missing --process_date"))

    val spark = org.apache.spark.sql.SparkSession.builder().getOrCreate()
    import spark.implicits._

    val df = spark.table(inputTable).filter($"process_date" === processDate)
    df.write.mode("overwrite").saveAsTable(s"${inputTable}_processed")
  }
}
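
If you also want a bad override from "Run now with different parameters" to fail fast with a clear message, a small validation step on top of that map can help. The helper below is a hypothetical sketch and assumes process_date is passed as an ISO date such as 2024-01-01.

import java.time.LocalDate
import java.time.format.DateTimeParseException

object ParamValidation {
  // Hypothetical helper: fail immediately if a required parameter is missing
  // or is not a valid ISO date.
  def requireDate(params: Map[String, String], key: String): String = {
    val raw = params.getOrElse(key, sys.error(s"Missing $key"))
    try {
      LocalDate.parse(raw)
      raw
    } catch {
      case _: DateTimeParseException => sys.error(s"$key is not a valid date: $raw")
    }
  }
}

// Usage inside main, after building `params`:
//   val processDate = ParamValidation.requireDate(params, "--process_date")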

Step 4: “Run now with different parameters” just works

When you override job parameters at runtime, those dynamic references expand into new resolved values, and your JAR sees them as args for that run.
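
The same override can also be triggered programmatically, since the Jobs API run-now call accepts a job_parameters map. Below is a minimal Scala sketch, assuming your workspace URL is in DATABRICKS_HOST, a personal access token is in DATABRICKS_TOKEN, and that the job_id and parameter values are placeholders to replace with your own.

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object RunNowWithParams {
  def main(args: Array[String]): Unit = {
    val host  = sys.env("DATABRICKS_HOST")   // e.g. https://<workspace>.cloud.databricks.com
    val token = sys.env("DATABRICKS_TOKEN")  // personal access token

    // Override job parameters for this one run; the JAR task sees the expanded values in args[].
    val body =
      """{
        |  "job_id": 123456789,
        |  "job_parameters": {
        |    "input_table": "main.default.my_table",
        |    "process_date": "2024-01-01"
        |  }
        |}""".stripMargin

    val request = HttpRequest.newBuilder()
      .uri(URI.create(s"$host/api/2.1/jobs/run-now"))
      .header("Authorization", s"Bearer $token")
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(body))
      .build()

    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    println(s"${response.statusCode()} ${response.body()}")   // returns the new run_id on success
  }
}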

How to use parameters in SQL (pick the lane)

A) SQL Task (query/file)

Reference parameters inside the SQL asset using:

SELECT *
FROM {{input_table}}
WHERE process_date = '{{process_date}}';

If the task supports it, the job parameters will be available by key, or you can define task parameters explicitly.

 

B) SQL inside a notebook task

Use widgets + named parameter syntax.

In a SQL cell:

SELECT *
FROM baby_names_prepared
WHERE Year_Of_Birth = :year_param

And in Python/Scala notebook code (not JAR code), you can read:

dbutils.widgets.get("year_param")

 

C) Spark SQL inside your Scala JAR

Read from args, then use DataFrame APIs to avoid building fragile SQL strings.
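
For example, assuming inputTable and processDate were parsed from args as in Step 3 above, both approaches below keep the parameter value out of hand-built SQL strings. The parameterized spark.sql overload with a named marker is an assumption about your runtime (it needs a recent Spark / Databricks Runtime version), so treat that part as optional.

import org.apache.spark.sql.SparkSession

object SqlFromJar {
  // Sketch: inputTable and processDate come from the args parsing shown in Step 3.
  def run(spark: SparkSession, inputTable: String, processDate: String): Unit = {
    import spark.implicits._

    // Preferred: DataFrame API, no SQL string building for the value at all.
    val viaApi = spark.table(inputTable).filter($"process_date" === processDate)

    // Alternative on recent runtimes: parameterized SQL with a named marker, which
    // avoids hand-rolled quoting/escaping of the value. The table name still comes
    // from the trusted job configuration via interpolation.
    val viaSql = spark.sql(
      s"SELECT * FROM $inputTable WHERE process_date = :process_date",
      Map("process_date" -> processDate)
    )

    viaApi.show(5)
    viaSql.show(5)
  }
}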

Debugging checklist (the stuff that saves hours)

  • Keys must match exactly (case-sensitive) across job params, task config, and your code.

  • For the JAR task, check run details and confirm Resolved Parameters show the expanded values, and that args order is what your parser expects.

  • Widgets do not apply to JAR tasks. If you see dbutils.widgets in JAR code, that’s the bug.

  • If you need to pass values between tasks, use task values from an upstream notebook and reference them downstream with something like:

    {{tasks.prev_task.values.some_key}}

    inside the JAR Parameters array.

Hope this helps, Louis.
