@Yanan Zhang:
As per the documentation you shared, Databricks Task parameter variables are used to parameterize notebook tasks in a Databricks workspace. They pass values into the notebook that is being executed as a task, for example from a parent notebook that triggers it. However, Databricks currently supports only a predefined, whitelisted set of Task parameter variables, and the documentation does not mention support for custom variables.
The whitelisted Task parameter variables that are currently supported in Databricks include:
- {{task_id}}: The unique identifier of the task.
- {{task_run_number}}: The number of times the task has been run.
- {{task_run_id}}: The unique identifier of the current task run.
- {{task_retry_number}}: The number of times the task has been retried.
- {{task_max_retries}}: The maximum number of retries allowed for the task.
These variables can be referenced in notebook cells or in the notebook parameters to pass values dynamically while a notebook task runs; a minimal example is sketched below.
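For illustration, here is a minimal sketch of reading one of these variables inside a notebook. It assumes the task's notebook parameters map a parameter named `run_id` to `{{task_run_id}}`; the parameter name is arbitrary and chosen only for this example.

```python
# Databricks notebook cell (Python).
# Assumption: the job's task settings define a notebook parameter such as
#   "run_id": "{{task_run_id}}"
# Databricks substitutes the {{task_run_id}} variable with the actual value
# when the task runs, and the notebook reads the resolved value as a widget.

run_id = dbutils.widgets.get("run_id")   # resolved value, e.g. "987654321"
print(f"This notebook is running as task run {run_id}")
```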
If you need custom variables or parameters in your Databricks tasks, you will have to handle them yourself within the notebooks, for example by passing them as input parameters (widgets) or reading them from external sources such as configuration files or environment variables. You can use standard Python or Scala code in the notebook to process these custom values; a sketch of a few common approaches follows.
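Here is a minimal sketch of those approaches in Python, under purely illustrative assumptions: the `environment` widget name, the `DEPLOY_REGION` environment variable, and the DBFS config path are all hypothetical and should be replaced with your own.

```python
import json
import os

# Option 1: a custom notebook parameter (widget) with a default value.
# The job's task settings, or dbutils.notebook.run from a caller notebook,
# can override it. "environment" is a hypothetical parameter name.
dbutils.widgets.text("environment", "dev")
environment = dbutils.widgets.get("environment")

# Option 2: a cluster environment variable (hypothetical name).
region = os.environ.get("DEPLOY_REGION", "us-east-1")

# Option 3: a configuration file, e.g. a JSON file on DBFS
# (path and keys are illustrative).
with open("/dbfs/FileStore/configs/job_settings.json") as f:
    settings = json.load(f)

print(environment, region, settings.get("batch_size"))
```

Which option fits best depends on where the value originates: widgets work well for per-run overrides, environment variables for per-cluster settings, and config files for values shared across several jobs.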