Does "databricks bundle deploy" clean up old files?
11-20-2023 01:43 PM
I'm looking at this page (Databricks Asset Bundles development work tasks) in the Databricks documentation.
When repo assets are deployed to a Databricks workspace, it is not clear whether "databricks bundle deploy" will remove files from the target workspace that aren't in the source repo. For example, if a repo contained a notebook named "test1.py" and had been deployed, but then "test1.py" was removed from the repo and a new notebook "test2.py" was created, what is the content of the target workspace after the next deploy? I believe it will contain both "test1.py" and "test2.py".
Secondly, the description of "databricks bundle destroy" does not indicate that it would remove all files from the workspace - only that it will remove all the artifacts referenced by the bundle. So when the "test1.py" file has been removed from the repo, and the "databricks bundle destroy" is run, will it only remove "test2.py" (which has not yet been deployed)?
I am trying to determine how to ensure that the shared workspace contains only the files that are in the repo: whatever I do in a release pipeline, the workspace should end up with only the latest assets from the repo and none of the old files that were previously in it.
The semantics of "databricks bundle deploy" (in particular the term "deploy") would indicate to me that it should do a cleanup of assets in the target workspace as part of the deployment.
But if that is not the case, then if I did a "databricks bundle destroy" prior to the "databricks bundle deploy", would that adequately clean up the target workspace? Or do I need to do something with "databricks fs rm" to delete all the files in the target workspace folder prior to the bundle deploy?
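To make the options concrete, here is a rough sketch of the release-pipeline steps I have in mind (the "dev" target name and the workspace folder path are hypothetical):

# Option A: destroy everything the bundle tracks, then redeploy from the current repo
databricks bundle destroy -t dev --auto-approve
databricks bundle deploy -t dev

# Option B: wipe the target workspace folder first, then redeploy
databricks workspace delete /Workspace/my-bundle/dev --recursive
databricks bundle deploy -t dev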
03-18-2024 04:18 PM
With the newer Databricks CLI (v0.215.0) this seems to be broken. Now I can't destroy a bundle if it doesn't exist - it used to be idempotent. Now I get this error (I shortened my deploy area to <ws> below):
Starting plan computation
Planning complete and persisted at <ws>/dab-stage/pytest/.databricks/bundle/new-cluster/terraform/plan
No resources to destroy in plan. Skipping destroy!
Error: open <ws>/dab-stage/pytest/.databricks/bundle/new-cluster/terraform/terraform.tfstate: no such file or directory
make: *** [test-on-cluster] Error 1
03-27-2024 09:54 AM
Will you add a synchronization option that does not remove existing jobs and pipelines?
We are using DAB for DBT and generally it works well; however, lifecycling models is a bit of an issue at the moment 🙂
08-09-2024 08:21 AM
Quick update on this: now if you remove a file locally (or from Git in the case of CI/CD) and run "bundle deploy" from the CLI, it will remove the corresponding file from your Databricks workspace.
e.g.
1. Add new file locally, run "bundle deploy"
2. File appears in Databricks workspace
3. Remove file locally, run "bundle deploy"
4. File is removed automatically from the Databricks workspace
Therefore, I don't think there's a need to manually do a cleanup of files.
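In commands, that cycle looks roughly like this (the notebook path is hypothetical):

# add a file and deploy - it shows up under the bundle's root_path in the workspace
touch src/my_notebook.py
databricks bundle deploy

# remove it and deploy again - it disappears from the workspace
rm src/my_notebook.py
databricks bundle deploy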
02-03-2025 12:56 PM
@jgraham0325 What CLI version are you using?
02-06-2025 09:00 AM
I'm using v0.240.0
11-21-2023 06:53 AM
One further question:
- The purpose of “databricks bundle destroy” is to remove all previously-deployed jobs, pipelines, and artifacts that are defined in the bundle configuration files.
Which bundle configuration files? The ones in the repo? Or are there bundle configuration files in the target workspace location that are used? If the previous version of the bundle contained a reference to test1.py and it has been deployed to a shared workspace, and the new version of the repo no longer contains test1.py, will the destroy command remove test1.py from the shared workspace?
08-12-2024 03:33 AM
@xhead I think the configuration files it's referring to are the local ones in your repo. It checks these against what has been deployed in the workspace and will remove anything that you've got rid of in your repo in the new version. Behind the scenes it uses a Terraform state file to keep track of what has been deployed, which is saved in the workspace along with your other files in the bundle.
In your example, yes it should remove test1.py from the shared workspace.
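If you want to see what the CLI thinks is deployed, the local cache of that state sits under .databricks in the bundle root (the same path pattern as in the error output earlier in this thread), and a copy is also kept in the workspace alongside the bundle files. For example, with a hypothetical "dev" target:

# local cache of the deployment state, written by "bundle deploy"
cat .databricks/bundle/dev/terraform/terraform.tfstate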
01-13-2025 11:57 PM
@JamesGraham that makes sense depending on the workflow that was implemented. When deploying bundles from a local clone of the repo, the tfstate will be local and (hopefully) kept intact, and then the behavior will be what you describe.
But what happens when databricks bundle is issued from inside a CI/CD pipeline in an ephemeral environment? The .tfstate in that ephemeral environment will be lost at the end of the pipeline, and then, if a newer version is later deployed with changes to the bundle definition, any previously deployed resource that was removed would be abandoned in the environment instead of cleaned up.
02-06-2025 09:18 AM
@RobertoBruno The tfstate used is actually the one stored in the Databricks workspace, not on the local filesystem. So providing you keep using the same root_path for your DAB, it should still correctly clean up any jobs you remove from your code in Git.
e.g. the root_path for staging should be fixed for each DAB, and only the service principal running the CI process should be able to run the deployment:
staging:
  workspace:
    host: https://adb-123456789.1.azuredatabricks.net/
    root_path: /Workspace/DAB/${bundle.name}/${bundle.target}
A way to demonstrate this is as follows (the equivalent CLI commands are sketched after these steps):
- Create new job in DAB locally
- Run bundle deploy
- Job appears in Databricks Workspace's list of jobs
- Delete new job locally
- Delete local .databricks folder (containing local tfstate). This simulates a new CI/CD run on a fresh build agent.
- Run bundle deploy
- Result: Job is deleted from the Databricks Workspace
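In commands, that walkthrough looks roughly like this (using the staging target from the example above):

databricks bundle deploy -t staging   # the new job appears in the workspace's list of jobs
# ...remove the job definition from your bundle YAML, then:
rm -rf .databricks                    # throw away the local tfstate, like a fresh build agent would
databricks bundle deploy -t staging   # the state stored in the workspace is used and the job is deleted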
01-21-2025 02:49 AM
Similar issue. Databricks bundle is issued from inside a CI/CD pipeline. If we rename a job, the old job is not deleted in the test or production workspaces. How do we fix this? Ideally the job would remain the same job under the new name, but the alternative is that at least the old job would be deleted.
02-06-2025 09:25 AM
@pernilak what are you using for the root_path in your test and production workspaces?
When renaming a job, it should create a new job with the new name and delete the old job (a YAML sketch follows below). This is provided that:
- the root_path is kept the same
- only one user, ideally a service principal, does the deployment
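For illustration, the rename in the bundle YAML might look like this (the resource key and job names are hypothetical):

resources:
  jobs:
    nightly_ingest:              # resource key kept the same
      name: "nightly-ingest-v2"  # renamed from "nightly-ingest"; after the next "bundle deploy"
                                 # no job should be left behind under the old name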

