01-24-2024 12:33 PM
Hello,
I am writing to bring to your attention an issue that we have encountered while working with Databricks and seek your assistance in resolving it.
When running a Workflows job that contains a "Run Job" task and clicking "View YAML/JSON," we have observed that the run_job_task parameter, specifically the job_id, is what gets versioned. However, this causes difficulties when deploying to other environments, such as "stage" and "production." In those cases, the job_id still holds the value from our lab (development) environment, of course, which causes errors when creating the job in another environment with tools like Terraform or Databricks Asset Bundles: the referenced jobs may or may not already exist there (they are created if missing), but their job_id will always differ between environments:
To perform exactly these actions, run the following command to apply:
terraform apply "prod.plan"
Error: cannot create job: Job 902577056531277 does not exist.
with module.databricks_workflow_job_module["job_father_one"].databricks_job.main,
on modules/databricks_workflow_job/main.tf line 7, in resource "databricks_job" "main":
Error: cannot create job: Job 1068053310953144 does not exist.
with module.databricks_workflow_job_module["job_father_two"].databricks_job.main,
on modules/databricks_workflow_job/main.tf line 7, in resource "databricks_job" "main":
##[error]Bash exited with code '1'.
In this case, jobs 902577056531277 and 1068053310953144 do not exist in the stage and production environments. As a result, we need to submit a sequential pull request and merge for each layer of "Run Job" tasks, changing the job_id to the correct value for that job in each environment, which is not an optimal approach.
To address this issue, we propose an alternative approach. Instead of versioning and referencing jobs in the "Run Job" task by job_id, we suggest versioning by job_name:
{
"name": "job_father_one",
"email_notifications": {},
...
"tasks": [
{
"task_key": "job_father_one",
"run_if": "ALL_SUCCESS",
"run_job_task": {
"job_name": "job_child_one"
},
"timeout_seconds": 0,
"email_notifications": {},
"notification_settings": {}
},
{
"task_key": "job_father_two",
"run_if": "ALL_SUCCESS",
"run_job_task": {
"job_name": "job_child_two"
},
"timeout_seconds": 0,
"email_notifications": {},
"notification_settings": {}
}
],
"tags": {},
"run_as": {
"user_name": "test@test.com"
}
}
Is that possible? This way, we wouldn't need to worry about the job_id when promoting to the stage and production environments, because jobs would reference each other by name, ensuring a smoother experience across environments.
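In the meantime, the lookup we have in mind could be sketched in Python. This is a minimal sketch only, assuming a payload shaped like the Jobs API `jobs/list` response; the field names are assumptions and the IDs are made up:

```python
def resolve_job_id(jobs_list_response, job_name):
    """Return the job_id of the first job whose name matches, or None.

    `jobs_list_response` is assumed to be shaped like the Jobs API
    jobs/list response: {"jobs": [{"job_id": ..., "settings": {"name": ...}}]}.
    """
    for job in jobs_list_response.get("jobs", []):
        if job.get("settings", {}).get("name") == job_name:
            return job["job_id"]
    return None


# Stubbed response standing in for a real API call (IDs are invented):
sample = {
    "jobs": [
        {"job_id": 111, "settings": {"name": "job_child_one"}},
        {"job_id": 222, "settings": {"name": "job_child_two"}},
    ]
}
print(resolve_job_id(sample, "job_child_two"))  # -> 222
```

Each environment would run this lookup against its own workspace at deploy time, so the versioned config only ever carries the stable name.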
Thank you for your time and assistance.
Best regards,
Harlem Muniz.
01-25-2024 05:11 AM
Hi @Retired_mod, thank you for your fast response.
However, the versioned JSON or YAML (via Databricks Asset Bundles) shown in the job UI should then also include the job_name; otherwise, we have to change it manually, replacing the job_id with the job_name. For this reason, I didn't open an issue on the Databricks Terraform provider GitHub: I genuinely believe this change belongs in Databricks itself.
What do you think I should do? Should I open an issue on the terraform-provider-databricks GitHub repository? Is there anything else I should do?
Let me know, and I'll take the necessary steps.
09-19-2024 07:55 AM
Hi, is there any update on the above issue?
01-14-2025 02:16 AM
Hi,
I have the same problem; I don't understand how I am supposed to use job_id properly in my Terraform files. Could you please provide an update, or at least a workaround?
01-14-2025 05:36 AM
Hi, sorry if I don't understand your use case: are you trying to start/stop a Databricks job via Terraform, and is that why you want to hardcode the job_id?
01-16-2025 06:11 AM
Hi @saurabh18cs ,
In my case, we are generating Databricks jobs through Terraform, passing JSON files for the job details. We deploy the same JSON in different environments, such as dev, sit, and uat.
But when we have a run_job task, it requires the job_id of the Databricks job, and in each environment the same job_name will have a different job_id; that is the issue.
Example:
{
"name": "RUN_JOB_TEST",
"email_notifications": {
"no_alert_for_skipped_runs": false
},
"webhook_notifications": {},
"timeout_seconds": 0,
"max_concurrent_runs": 1,
"tasks": [
{
"task_key": "RUN_JOB_TEST",
"run_if": "ALL_SUCCESS",
"run_job_task": {
"job_id": 370187610293026
},
"timeout_seconds": 0,
"email_notifications": {}
}
],
"queue": {
"enabled": true
},
"run_as": {
"user_name": "abc@xyz.com"
}
}
Now, this configuration works fine in the dev environment, but when we deploy the same JSON in sit, it fails because the job_id value is incorrect there.
01-17-2025 12:10 AM
Hi @sid_001, why do you need to hardcode the job_id at all to run a task? You shouldn't be specifying any job_id in your JSON files either. This should be done by job_name, and the job_id will be auto-generated.
01-20-2025 11:15 PM
Hi,
In Terraform, there is no job_name attribute for the run_job task type.
https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job
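That said, the provider does expose a `databricks_job` data source that can look a job up by name, which may cover this case. A minimal sketch (the job and task names are placeholders, and exact attribute support depends on the provider version):

```hcl
# Look up the child job by name in the target workspace,
# so the numeric job_id never has to be hardcoded per environment.
data "databricks_job" "child" {
  job_name = "job_child_one"
}

resource "databricks_job" "parent" {
  name = "job_father_one"

  task {
    task_key = "job_father_one"
    run_job_task {
      # The data source exports the ID as a string; Terraform
      # coerces it where a number is expected.
      job_id = data.databricks_job.child.id
    }
  }
}
```

With this, the same Terraform config can be applied in dev, sit, and uat, and each apply resolves the job_id for that workspace.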
01-21-2025 12:33 AM
You can handle this with the Databricks CLI plus a null_resource in your Terraform, invoked from your DevOps pipeline.
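A rough sketch of that idea, assuming the Databricks CLI is authenticated on the pipeline agent and `jq` is available; the CLI's JSON output shape varies by version, so treat the filter as an assumption:

```hcl
# Resolve the child job's numeric ID at apply time instead of versioning it.
resource "null_resource" "resolve_child_job_id" {
  triggers = { always_run = timestamp() }

  provisioner "local-exec" {
    # Writes the looked-up job_id to a file that later steps can read.
    command = <<-EOT
      databricks jobs list --output json \
        | jq -r '.[] | select(.settings.name == "job_child_one") | .job_id' \
        > child_job_id.txt
    EOT
  }
}
```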