Greetings @SanjeevPrasad ,
I did a bit of digging and pulled together a few pointers to help guide you here.
When you’re working with serverless jobs in Databricks Asset Bundles, the performance_target flag lives at the job level — not on the task or the cluster. Azure Databricks is pretty explicit about this, and the YAML needs to reflect it.
Here’s the pattern that tends to work reliably:
```yaml
resources:
  jobs:
    my_job:
      name: my_job_name
      performance_target: STANDARD
      tasks:
        - task_key: my_task
          notebook_task:
            notebook_path: ./notebooks/my_notebook.py
          environment_key: default
      environments:
        - environment_key: default
          spec:
            environment_version: '2'
```
A quick rundown on the values:
• STANDARD: Cost-optimized with a bit more startup latency (think ~4–6 minutes).
• PERFORMANCE_OPTIMIZED (or just omit the flag): Faster startup and runtime for time-sensitive jobs.
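If you'd rather spell out the faster behavior than rely on the default, the only change is the value of that one field. A minimal sketch, reusing the hypothetical job name from the example above:

```yaml
resources:
  jobs:
    my_job:
      name: my_job_name
      # Explicitly request the faster serverless tier; per the rundown above,
      # omitting performance_target is described as behaving the same way.
      performance_target: PERFORMANCE_OPTIMIZED
```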
A few gotchas I see trip folks up:
- Make sure performance_target is defined directly under the job, not tucked under tasks or cluster definitions.
- For serverless notebook tasks, either skip clusters entirely or just point to an environment, as in the example above.
- Use a reasonably current Databricks CLI (0.257.0+); older versions don't fully support the newer serverless settings. There's a quick check sketched right after this list.
- Remember this is mostly driven through the API and Bundles; the UI won't always show a matching toggle.
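Here's roughly how I'd sanity-check the CLI version and the bundle config from the project root. The job key (my_job) just mirrors the example above; swap in your own.

```bash
# Confirm the CLI is new enough for the serverless settings (0.257.0 or later).
databricks --version

# From the bundle root: check that the YAML (including performance_target) parses cleanly,
# then deploy and trigger the job defined under resources.jobs.my_job.
databricks bundle validate
databricks bundle deploy
databricks bundle run my_job
```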
One last note: Delta Live Tables is its own world — pipelines use the UI checkbox for “Performance optimized,” not the performance_target field.
Hope this helps nudge things in the right direction. Let me know what you find.
Regards, Louis.