
YAML file replacing job cluster with all-purpose cluster when running a workflow

dataengutility
New Contributor III

Hi all,

I've been having some trouble running a workflow that consists of three tasks that run sequentially. Task1 runs on an all-purpose cluster and kicks off Task2, which needs to run on a job cluster. Task2 then kicks off Task3, which also uses a job cluster.

We have identified that Task2 runs on an all-purpose cluster instead of a job cluster, despite the task being configured to use a job cluster in the asset bundle's yaml file. Task2 depends on Task1, which does use the all-purpose cluster as specified in the yaml. Whenever we run databricks bundle validate, the task appears to be overwritten to use the all-purpose cluster even though the yaml explicitly assigns it a job cluster. Other changes, such as renaming tasks, are picked up by the validate command, so the file is definitely being read.
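For context, this is roughly how we run the validation (the dev target name is just an example from our setup, and flag support may vary by CLI version):

databricks bundle validate -t dev
# printing the resolved config as JSON makes it easy to see which
# cluster each task actually ended up with
databricks bundle validate -t dev -o json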

Here is a snippet of the yaml file:

tasks:
  - task_key: Task1
    existing_cluster_id: all-purpose-cluster-id
    notebook_task:
      notebook_path: ../src/Task1.py
      base_parameters:
        catalog: ${var.catalog}
        target: ${var.target}

  - task_key: Task2
    job_cluster_key: job-cluster
    depends_on:
      - task_key: Task1
    notebook_task:
      notebook_path: ../src/Task2.py
      base_parameters:
        catalog: ${var.catalog}
        target: ${var.target}

  - task_key: Task3
    job_cluster_key: job-cluster
    depends_on:
      - task_key: Task2
    notebook_task:
      notebook_path: ../src/Task3.py
      base_parameters:
        catalog: ${var.catalog}
        target: ${var.target}

After running databricks bundle validate, this is the output:

"tasks": [
          {
            "existing_cluster_id": "all-purpose-cluster-id",
            "notebook_task": {
              "base_parameters": {
                "catalog": "catalog",
                "target": "target"
              },
              "notebook_path": "/Users/user/.bundle/folder/dev/files/src/Task1"
            },
            "task_key": "Task1"
          },
          {
            "depends_on": [
              {
                "task_key": "Task1"
              }
            ],
            "existing_cluster_id": "all-purpose-cluster-id",
            "notebook_task": {
              "base_parameters": {
                "catalog": "catalog",
                "target": "target"
              },
              "notebook_path": "/Users/user/.bundle/folder/dev/files/src/Task2"
            },
            "task_key": "Task2"
          },
          {
            "depends_on": [
              {
                "task_key": "Task2"
              }
            ],
            "existing_cluster_id": "all-purpose-cluster-id",
            "notebook_task": {
              "base_parameters": {
                "catalog": "catalog",
                "target": "target"
              },
              "notebook_path": "/Users/user/.bundle/folder/dev/files/src/Task3"
            },
            "task_key": "Task3"
          }
        ]

As you can see, the all-purpose cluster id is replacing the job-cluster key for Task2 and Task3. The strangest part of all of this is that I'm the only one on the team experiencing the issue; everyone else can run the workflow without any problems. Any ideas on how to resolve this?

Thank you in advance!
1 ACCEPTED SOLUTION

My issue is resolved. I had to upgrade my CLI version from v0.215 to v0.221 and everything works fine now. Thank you for your help!
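In case it helps others, this is roughly what the fix looked like on my machine (assuming a Homebrew install; adjust for however you installed the CLI):

databricks --version      # was reporting v0.215 before the upgrade
brew upgrade databricks   # upgrade path for the databricks/tap Homebrew install
databricks --version      # should now report v0.221 or later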

4 REPLIES

jacovangelder
Honored Contributor

I don't know if you've cut off your yaml snippet, but it doesn't show the job cluster with key job-cluster. Just to validate: is your job cluster also defined in your workflow yaml?

Edit: Looking at it again and knowing the defaults, it looks like you're pointing to job_cluster_key "job-cluster". The default is job_cluster (with an underscore instead of a regular hyphen). Could this be your issue?
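For comparison, a minimal sketch of what I'd expect (job and cluster names here are illustrative), with the task's job_cluster_key exactly matching the key defined under job_clusters:

resources:
  jobs:
    my_job:
      job_clusters:
        - job_cluster_key: job_cluster        # this string...
          new_cluster:
            spark_version: 13.3.x-scala2.12   # example runtime
            node_type_id: i3.xlarge           # example node type
            num_workers: 2
      tasks:
        - task_key: Task2
          job_cluster_key: job_cluster        # ...must match this one exactly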

Hi, thank you for your response! Yes, I did cut the yaml snippet down to the problem area since the full yaml is quite a large file. We do define the job cluster in the workflow yaml; "job-cluster" is just a placeholder name. Sorry for the confusing snippet. Here is the portion where the job cluster is defined:

job_clusters:
  - job_cluster_key: job-cluster
    new_cluster:
      spark_version: 12.2.x-scala2.12
      node_type_id: m5d.4xlarge
      driver_node_type_id: m5d.4xlarge
      data_security_mode: SINGLE_USER
      runtime_engine: PHOTON
      autoscale:
        min_workers: 2
        max_workers: 12
      aws_attributes:
        instance_profile_arn: [removed]
        zone_id: auto
        first_on_demand: 1

That should work just fine; I just tested it on my end. As long as your job_cluster_key value matches the one in your task, it should work.

Perhaps you can try throwing away your bundle folder in your workspace (and perhaps your workflows too) and then deploying again. Do keep in mind that this purges the run history. It could be that the Terraform state is somehow messed up from previous faulty deployments.
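If you'd rather do that via the CLI than by deleting folders in the workspace UI, something along these lines should give you a clean slate (the dev target name is illustrative):

databricks bundle destroy -t dev   # tears down the deployed jobs and the bundle's Terraform state
databricks bundle deploy -t dev    # redeploys everything from scratch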

