06-10-2024 09:51 AM
Hi all,
I have been having trouble running a workflow that consists of three tasks that run sequentially. Task1 runs on an all-purpose cluster and kicks off Task2, which needs to run on a job cluster. Task2 then kicks off Task3, which also uses a job cluster.
We have identified that Task2 is running on an all-purpose cluster instead of a job cluster, even though the task is configured to use a job cluster in the asset bundle's YAML file. Task2 depends on Task1, which does use the all-purpose cluster as specified in the YAML. We tried modifying the YAML file, but when running databricks bundle validate it looks like the task is being overwritten to use the all-purpose cluster despite being explicitly pointed at the job cluster. Other changes, such as renaming tasks, are picked up by the validate command just fine.
Here is a snippet of the YAML file:
tasks:
  - task_key: Task1
    existing_cluster_id: all-purpose-cluster-id
    notebook_task:
      notebook_path: ../src/Task1.py
      base_parameters:
        catalog: ${var.catalog}
        target: ${var.target}
  - task_key: Task2
    job_cluster_key: job-cluster
    depends_on:
      - task_key: Task1
    notebook_task:
      notebook_path: ../src/Task2.py
      base_parameters:
        catalog: ${var.catalog}
        target: ${var.target}
  - task_key: Task3
    job_cluster_key: job-cluster
    depends_on:
      - task_key: Task2
    notebook_task:
      notebook_path: ../src/Task3.py
      base_parameters:
        catalog: ${var.catalog}
        target: ${var.target}
After running databricks bundle validate, this is the output:
"tasks": [
{
"existing_cluster_id": "all-purpose-cluster-id",
"notebook_task": {
"base_parameters": {
"catalog": "catalog",
"target": "target"
},
"notebook_path": "/Users/user/.bundle/folder/dev/files/src/Task1"
},
"task_key": "Task1"
},
{
"depends_on": [
{
"task_key": "Task1"
}
],
"existing_cluster_id": "all-purpose-cluster-id",
"notebook_task": {
"base_parameters": {
"catalog": "catalog",
"target": "target"
},
"notebook_path": "/Users/user/.bundle/folder/dev/files/src/Task2"
},
"task_key": "Task2"
},
{
"depends_on": [
{
"task_key": "Task2"
}
],
"existing_cluster_id": "all-purpose-cluster-id",
"notebook_task": {
"base_parameters": {
"catalog": "catalog",
"target": "target"
},
"notebook_path": "/Users/user/.bundle/folder/dev/files/src/Task3"
},
"task_key": "Task3"
}
]
As you can see, the all-purpose cluster ID is replacing the job-cluster key for Task2 and Task3. The strangest part of all of this is that I'm the only one on the team experiencing the issue; everyone else seems to be able to run the workflow without any problems. Any ideas on how to resolve this?
Thank you in advance!
06-11-2024 06:32 AM - edited 06-11-2024 06:35 AM
I don't know if you've cut off your YAML snippet, but it doesn't show a job cluster with the key job-cluster. Just to validate: is your job cluster also defined in your workflow YAML?
Edit: Looking at it again, and knowing the defaults, it looks like you're pointing to job_cluster_key "job-cluster". The default is job_cluster (with an underscore instead of a hyphen). Could this be your issue?
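For illustration, the reference in the task has to match the key defined under job_clusters character for character; a minimal sketch (values are placeholders):
job_clusters:
  - job_cluster_key: job-cluster      # definition
tasks:
  - task_key: Task2
    job_cluster_key: job-cluster      # reference must match the definition exactly, hyphen included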
06-11-2024 11:44 AM
Hi, thank you for your response! Yes, I did cut the YAML snippet down to the problem area, since the full file is quite large. We do define the job cluster in the workflow YAML; "job-cluster" is just a pseudonym. Sorry for the confusing snippet. Here is the portion where the job cluster is defined:
job_clusters:
  - job_cluster_key: job-cluster
    new_cluster:
      spark_version: 12.2.x-scala2.12
      node_type_id: m5d.4xlarge
      driver_node_type_id: m5d.4xlarge
      data_security_mode: SINGLE_USER
      runtime_engine: PHOTON
      autoscale:
        min_workers: 2
        max_workers: 12
      aws_attributes:
        instance_profile_arn: [removed]
        zone_id: auto
        first_on_demand: 1
06-11-2024 11:58 AM - edited 06-12-2024 12:15 AM
That should work just fine; I just tested it on my end. As long as the job_cluster_key value matches the one in your task, it should work.
Perhaps you can try throwing away your bundle folder (and perhaps your workflows too) in your workspace and then deploying again. Do keep in mind that run history is purged by this. It could be that the Terraform state is somehow messed up because of a previous faulty deployment.
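If you'd rather do that from the CLI than the workspace UI, something along these lines should work (the dev target name is an assumption; substitute your own bundle target):
databricks bundle destroy -t dev    # removes everything this bundle deployed, including job run history
databricks bundle validate -t dev   # confirm the resolved config now shows job_cluster_key
databricks bundle deploy -t dev     # redeploy from a clean slate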
06-17-2024 09:57 AM
My issue is resolved. I had to upgrade my CLI version from v0.215 to v0.221 and everything works fine now. Thank you for your help!
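For anyone landing here later: you can check the installed version with databricks -v. A sketch of the upgrade, assuming the CLI was installed with the standalone install script (Homebrew users would run brew upgrade databricks instead):
databricks -v    # e.g. Databricks CLI v0.215.0
# Re-run the official install script to pull the latest release
curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sh
databricks -v    # should now report a newer version, e.g. v0.221.0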