Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to configure DAB bundles to run serverless

mlivshutz
New Contributor II

I am following the guidelines in https://docs.databricks.com/aws/en/dev-tools/bundles/jobs-tutorial to set up the job for serverless. It says to "omit the job_clusters configuration from the bundle configuration file." It sounds like the idea is simply to omit any mention of compute in the task configuration, and Databricks will then run the job on serverless.

However, when I run bundle validate or bundle deploy, I get an error that I need to specify one of: job_cluster_key, environment_key, existing_cluster_id, new_cluster.

What do I need to do to enable serverless in the DAB configuration?

Note: this project is managed by Poetry, so that's why the DAB .yml file resides in the "resources" folder.

"Error: Missing required cluster or environment settings
at resources.jobs.dbx_backfill_emotion_job.tasks[0]
in resources/dbx_backfill_emotion_job.yml:38:11
databricks.yml:26:15

Task "dbx_backfill_emotion_main" requires a cluster or an environment to run.
Specify one of the following fields: job_cluster_key, environment_key, existing_cluster_id, new_cluster."

2 REPLIES

ashraf1395
Valued Contributor III

Hey @mlivshutz,
Right, we just need to omit the job compute. Can you make sure you are using the latest Databricks CLI? If possible, can you share an example of your databricks.yml file?
This might help: https://docs.databricks.com/aws/en/dev-tools/bundles/resource-examples#job-that-uses-serverless-comp...
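For reference, the serverless job example on that page has roughly this shape (a minimal sketch; the job, task, and package names here are placeholders): instead of a cluster, the job declares an environments block, and each task points at it with environment_key. For a python_wheel_task, the wheel also moves out of the per-task libraries list and into the environment's dependencies.

resources:
  jobs:
    my_serverless_job:
      # No job_clusters and no per-task cluster fields: compute is serverless.
      environments:
        - environment_key: default
          spec:
            client: "1"
            dependencies:
              # On serverless, libraries are declared here rather than
              # in a per-task libraries list.
              - ../dist/*.whl
      tasks:
        - task_key: main_task
          environment_key: default
          python_wheel_task:
            package_name: my_package
            entry_point: main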

mlivshutz
New Contributor II

Hi, @ashraf1395 , 
Thank you for looking at my question. My CLI is 0.243, which is current as of today (3/17/25).
The task definition within resources/dbx_backfill_emotion_job.yml:

tasks:
  - task_key: dbx_backfill_base_fields_x_1
    # job_cluster_key: job_cluster
    python_wheel_task:
      package_name: dbx_backfill_emotion
      entry_point: main
      named_parameters: &parameter_stub
        source_path: x_smpl10_part_*
        dbx_schemas_type: base_fields_schema
        backfill_date_range: 2024-12-01 UPTO_BUT_EXCLUDING 2025-01-01
        target_catalog: ${bundle.environment}_data_warehouse
        target_schema: ${workspace.current_user.short_name}
        target_table: x_base_fields_backfill_v2_dbxschema
    libraries: &library_stub
      # By default we just include the .whl file generated for the dbx_backfill_emotion package.
      # See https://docs.databricks.com/dev-tools/bundles/library-dependencies.html
      # for more information on how to add other libraries.
      - whl: ../dist/*.whl

My databricks.yml (I am deploying to the dev target):

targets:
  # The 'dev' target, for development purposes. This target is the default. 
  dev:
    # We use 'mode: development' to indicate this is a personal development copy:
    # - Deployed resources get prefixed with '[dev my_user_name]'
    # - Any job schedules and triggers are paused by default
    # - The 'development' mode is used for Delta Live Tables pipelines
    # Your job runs as your service principal so any GCP/AWS permissions need to be assigned to you
    mode: development
    default: true
    resources:
      jobs:
        dbx_backfill_emotion_job:
          run_as:
            user_name: ${workspace.current_user.userName}
          # Override default settings here.
          # tasks:
          #   - task_key: dbx_backfill_emotion_main
          #     python_wheel_task:
          #       named_parameters:
          #         target_schema: ${workspace.current_user.short_name}
    workspace:
      host: [redacted]
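Applying the pattern from the docs example above to this config would mean replacing the commented-out job_cluster_key and the libraries stub with an environment reference, roughly like this (an untested sketch that reuses the names from the snippets above):

environments:
  - environment_key: default
    spec:
      client: "1"
      dependencies:
        # The wheel moves from the task-level libraries list into the
        # environment spec when the job runs on serverless.
        - ../dist/*.whl

tasks:
  - task_key: dbx_backfill_base_fields_x_1
    environment_key: default  # replaces job_cluster_key / new_cluster
    python_wheel_task:
      package_name: dbx_backfill_emotion
      entry_point: main
      # named_parameters unchanged from the snippet above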
