Hey, I'm trying to find a way to specify serverless compute for the dev environment and job clusters for the test and prod environments in databricks.yml.
The problem is that it seems impossible. I've tried many approaches, but the only outcomes I can achieve are either:
A. I don't specify any cluster, and the job runs on serverless in all three environments.
B. I specify a cluster, but then I can't explicitly set dev to serverless (the task must use one of the predefined job clusters).
I’ve searched everywhere but haven’t found any information on how to do this.
Has anyone found a way to make this work?
Here’s the code I wish would work (or something similar), so you know what I mean:
# databricks.yml
bundle:
  name: medallion_architecture

include:
  - resources/*.yml

variables:
  catalog:
    type: string
  cluster_config:
    type: complex # declared as complex so it can hold a full cluster spec
targets:
  dev:
    mode: development
    default: true
    workspace:
      host: adb-xxxxxxxxxxxxxx.azuredatabricks.net
    variables:
      catalog: "dev"
      cluster_config: serverless # (this doesn't work)

  test:
    mode: production
    workspace:
      host: adb-xxxxxxxxxxxxxx.azuredatabricks.net
      root_path: /Shared/.bundle/${bundle.target}/${bundle.name}
    variables:
      catalog: "test"
      cluster_config:
        spark_version: "15.4.x-scala2.12"
        node_type_id: "Standard_D8s_v3"
        autoscale:
          min_workers: 2
          max_workers: 6
        azure_attributes:
          availability: "ON_DEMAND_AZURE"

  prod:
    mode: production
    workspace:
      host: adb-xxxxxxxxxxxxxx.azuredatabricks.net
      root_path: /Shared/.bundle/${bundle.target}/${bundle.name}
    run_as:
      user_name: ${workspace.current_user.userName}
    variables:
      catalog: "prod"
      cluster_config:
        spark_version: "15.4.x-scala2.12"
        node_type_id: "Standard_D8s_v3"
        autoscale:
          min_workers: 2
          max_workers: 10
        azure_attributes:
          availability: "ON_DEMAND_AZURE"
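
For context, one direction I'm considering (untested, so treat it as a sketch): bundles let a target override individual task settings by repeating the same task_key under targets.<name>.resources, and a task with no compute settings runs on serverless. So in theory the base job could omit compute entirely (dev stays serverless) and the test/prod targets could graft ${var.cluster_config} onto the task as a new_cluster. Assuming a job named medallion_job with a single task main_task (both names and the notebook path are placeholders):

# resources/medallion_job.yml -- base definition: no compute, so it runs on serverless
resources:
  jobs:
    medallion_job:
      name: medallion_job_${bundle.target}
      tasks:
        - task_key: main_task
          notebook_task:
            notebook_path: ../src/main.ipynb

# databricks.yml -- per-target override grafts a classic cluster onto the same task
targets:
  test:
    resources:
      jobs:
        medallion_job:
          tasks:
            - task_key: main_task # matched by task_key; settings are merged
              new_cluster: ${var.cluster_config}
  prod:
    resources:
      jobs:
        medallion_job:
          tasks:
            - task_key: main_task
              new_cluster: ${var.cluster_config}

With no override in dev, the task would keep running on serverless there, and cluster_config would only need values for test and prod. I haven't verified the merge semantics for every field, though, so I'd still love to hear from anyone who has this working end to end.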