So I figured it out.
You can indeed reference existing cluster policies, but my mistake was assuming that doing so automatically pulls in all the cluster config.
In fact you still have to set some cluster config explicitly in the resources yaml:
- spark_version
- spark_conf + custom_tags (for singlenode clusters, see link Szymon posted)
- node_type_id + driver_node_type_id
After adding those in the yaml, deployment succeeded.
I don't know why it behaves this way; perhaps it depends on the policy definition (e.g. which attributes are marked optional in the policy), but it would be nice if the requirements were documented somewhere.
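For reference, here is a minimal sketch of what worked for me in the bundle's resources yaml. The policy ID, node types, and notebook path are placeholders, and the single-node settings (spark_conf + custom_tags) are only needed for single-node clusters:

```yaml
resources:
  jobs:
    my_job:
      name: my_job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: /path/to/notebook
          new_cluster:
            policy_id: "<your-policy-id>"      # existing cluster policy
            spark_version: "15.4.x-scala2.12"  # still required despite the policy
            node_type_id: "Standard_DS3_v2"
            driver_node_type_id: "Standard_DS3_v2"
            # Single-node clusters additionally need these:
            spark_conf:
              spark.databricks.cluster.profile: singleNode
              spark.master: "local[*]"
            custom_tags:
              ResourceClass: SingleNode
            num_workers: 0
```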