Hey, so I've just started using Databricks Asset Bundles (DAB) to automatically manage job configs via CI/CD.
I had a pre-existing job (let's say ID 123) which was created manually and had this config:
resources:
  jobs:
    My_Job_A:
      name: My Job A
I wanted to manage it via DAB, so I created a local YAML file with this config, which changes the job name from My Job A to A Different Name for Job A:
targets:
  dev:
    resources:
      jobs:
        My_Job_A:
          name: A Different Name for Job A
First I had to BIND this bundle resource to my existing job, so I ran:
databricks bundle deployment bind My_Job_A 123 --target dev --auto-approve
And then I was able to deploy the bundle:
databricks bundle deploy --target dev
And it WORKED! My remote job 123 was now called A Different Name for Job A, and all of its previous runs were maintained.
However, its remote config now looked like this:
resources:
  jobs:
    A_Different_Name_for_Job_A:
      name: A Different Name for Job A
while my local config still looked like this:
targets:
  dev:
    resources:
      jobs:
        My_Job_A:
          name: A Different Name for Job A
So I wrongly thought it would be best to have the keys match between local and remote, and changed my local config to look exactly like the remote one:
targets:
  dev:
    resources:
      jobs:
        A_Different_Name_for_Job_A:
          name: A Different Name for Job A
HOWEVER, after deploying this change, my old job (ID 123) was DELETED and a new job (ID 456) was created in its place.
Is this the expected behaviour?
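If it is, I'm guessing the "safe" way to rename a resource key would have been to re-bind the new key to the existing job before deploying, i.e. repeat the bind step from above using the new key. This is just my assumption, I haven't verified it:

```shell
# Assumption (untested): the resource key is the bundle-side identity, so after
# renaming the key in the local YAML, re-bind the NEW key to the existing
# job ID 123 before deploying, using the same bind command as before.
databricks bundle deployment bind A_Different_Name_for_Job_A 123 --target dev --auto-approve
databricks bundle deploy --target dev
```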
This behaviour seems very dangerous, since it's not clear how job keys, job names, and job IDs interact (or maybe I've just not found the documentation about it?), especially since it's not possible to recover deleted jobs on my own.
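For reference, my working mental model (which may be exactly what's wrong here) is that the resource key (My_Job_A) is the bundle's stable identifier for the job, while name is just a display label, so a rename should only ever touch name and leave the key alone:

```yaml
# Assumption: the key is identity, the name is just a label.
targets:
  dev:
    resources:
      jobs:
        My_Job_A:                           # stable key: never rename once bound
          name: A Different Name for Job A  # display name: safe to change
```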