10-08-2024 06:32 AM
Has anyone succeeded in using existing compute policies (created via the UI) in asset bundles for creating a job?
I defined the policy_id in the resources/job yml for the job_cluster, but when deploying I get errors saying spark_version is not defined (it is defined in the policy), or that other parameters are missing (all of them defined in the policy).
So it seems the policy is not being fetched or applied.
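For illustration, the kind of job_cluster definition I mean looks roughly like this (the policy ID, names, and paths below are just placeholders):

resources:
  jobs:
    my_job:
      name: my_job
      job_clusters:
        - job_cluster_key: main
          new_cluster:
            # only the existing UI-created policy is referenced here;
            # the expectation was that everything else comes from the policy
            policy_id: "0123456789ABCDEF"
      tasks:
        - task_key: main_task
          job_cluster_key: main
          notebook_task:
            notebook_path: ../notebooks/main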
10-08-2024 06:52 AM - edited 10-08-2024 06:52 AM
Hi @-werners-,
I think you ran into the same kind of issue as the others in the discussion below. There is an ongoing issue with the Terraform provider; you can take a look at the GitHub thread:
10-08-2024 07:06 AM
It looks like it, but I also get errors on non-single-node clusters.
There might be an underlying issue with policy settings not being applied.
Thanks for the link though.
10-09-2024 12:04 AM
So I figured it out.
You can refer to existing cluster policies, but I mistakenly assumed that doing so would add all of the cluster config automatically.
In fact you still have to add some cluster config in the resources yaml:
- spark_version
- spark_conf + custom_tags (for single-node clusters, see the link Szymon posted)
- node_type_id + driver_node_type_id
Once those were added to the yaml, deployment worked.
I don't know why it works this way; perhaps it is linked to the policy definition (e.g. optional attributes in the policy), but it would be nice if these requirements were documented.
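For reference, a sketch of the shape the job_cluster block ended up having. All values (policy ID, Spark version, node types) are placeholders, and the spark_conf/custom_tags entries only apply to single-node clusters:

resources:
  jobs:
    my_job:
      name: my_job
      job_clusters:
        - job_cluster_key: main
          new_cluster:
            policy_id: "0123456789ABCDEF"          # existing policy created in the UI
            spark_version: "15.4.x-scala2.12"      # still required despite the policy
            node_type_id: "Standard_DS3_v2"
            driver_node_type_id: "Standard_DS3_v2"
            # the following only apply to single-node clusters
            # (see the thread Szymon linked)
            num_workers: 0
            spark_conf:
              spark.databricks.cluster.profile: singleNode
              spark.master: "local[*]"
            custom_tags:
              ResourceClass: SingleNode
      tasks:
        - task_key: main_task
          job_cluster_key: main
          notebook_task:
            notebook_path: ../notebooks/main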