I'm running into an issue with databricks bundle deploy when using job clusters.
When I run databricks bundle deploy on a new workspace, or after destroying the previously deployed resources, the deployment fails with: Error: cannot update job: At least one EBS volume must be attached for clusters created with node type m8g.xlarge.
This error occurs even though the cluster configuration in my jobs.yml file correctly specifies EBS volumes under aws_attributes, as shown below.
- job_cluster_key: process_to_bronze_cluster
  new_cluster:
    spark_version: 17.0.x-scala2.13
    aws_attributes:
      first_on_demand: 1
      availability: SPOT_WITH_FALLBACK
      instance_profile_arn: <instance_profile>
      spot_bid_price_percent: 100
      ebs_volume_type: GENERAL_PURPOSE_SSD
      ebs_volume_count: 3
      ebs_volume_size: 100
    node_type_id: m8g.large
    driver_node_type_id: m8g.xlarge # This in particular is triggering the error
    data_security_mode: SINGLE_USER
    runtime_engine: PHOTON
    autoscale:
      min_workers: 1
      max_workers: 4
I don't think this is an issue isolated to the node type itself, but rather a bug where the EBS volumes attached to this node type aren't being detected. There are two odd things, though:
1. I copied this job as YAML from the Databricks GUI, and the job runs fine there.
2. This is the second job inside my jobs.yml file. If I comment out this job and run databricks bundle deploy, only the first job gets deployed. If I then uncomment the second job and deploy again, it works (see the sketch after this list).
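To make that two-step sequence concrete, this is roughly what I'm doing; the job keys and file layout below are placeholders, not my real definitions:

# jobs.yml -- step 1: only the first job is active, the second is commented out
resources:
  jobs:
    first_job:                 # placeholder key for my first job
      name: first_job
      # ... unchanged job definition ...

    # second_job:              # placeholder key for the job that uses process_to_bronze_cluster
    #   name: second_job
    #   job_clusters:
    #     - job_cluster_key: process_to_bronze_cluster
    #       new_cluster:
    #         ...              # the cluster config shown above

# Step 1: run "databricks bundle deploy" -> succeeds, only first_job is created.
# Step 2: uncomment second_job and run "databricks bundle deploy" again -> also succeeds,
#         even though deploying both jobs in a single fresh run fails with the EBS error.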
So these two factors clearly point towards a bug in DABs, right? Is there a workaround I could use while I wait for a fix?
Thanks!