Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

user standard serverless with asset bundle on Azure

SanjeevPrasad
New Contributor II

Is anyone running into issues using standard serverless with Asset Bundles?

We tried all options with the line below:

      performance_target: STANDARD

but the bundle ignores that value and uses a performance-optimized cluster instead, which is not expected. Any lead on the right config would be helpful.
1 ACCEPTED SOLUTION

Accepted Solutions

Thank you @Louis_Frolio! Looks like the Databricks CLI version was the culprit; after updating it, I was able to create a standard serverless cluster using a Databricks Asset Bundle.


3 REPLIES

Louis_Frolio
Databricks Employee

Greetings @SanjeevPrasad ,

I did a bit of digging and pulled together a few pointers to help guide you here.

When you’re working with serverless jobs in Databricks Asset Bundles, the performance_target flag lives at the job level — not on the task or the cluster. Azure Databricks is pretty explicit about this, and the YAML needs to reflect it.

Here’s the pattern that tends to work reliably:

resources:
  jobs:
    my_job:
      name: my_job_name
      performance_target: STANDARD
      tasks:
        - task_key: my_task
          notebook_task:
            notebook_path: ./notebooks/my_notebook.py
          environment_key: default
      environments:
        - environment_key: default
          spec:
            environment_version: '2'

A quick rundown on the values:

• STANDARD: Cost-optimized with a bit more startup latency (think ~4–6 minutes).

• PERFORMANCE_OPTIMIZED (or just omit the flag): Faster startup and runtime for time-sensitive jobs.

A few gotchas I see trip folks up:

  1. Make sure performance_target is defined directly under the job, not tucked under tasks or cluster definitions.

  2. For serverless notebook tasks, either skip clusters entirely or just point to an environment as in the example.

  3. Use a reasonably current Databricks CLI (0.257.0+). Older versions don’t fully support the newer serverless settings.

  4. Remember this is mostly driven through the API and Bundles; the UI won’t always show a matching toggle.

One last note: Delta Live Tables is its own world — pipelines use the UI checkbox for “Performance optimized,” not the performance_target field.

Hope this helps nudge things in the right direction. Let me know what you find.

Regards, Louis.


Hubert-Dudek
Esteemed Contributor III
resources:
  jobs:
    my_dabs:
      performance_target: STANDARD

Please check whether it is at the correct level in the YAML. Also consider updating the CLI. I've just tested it, and it worked properly.