You're right that everything is ephemeral on the GitHub runner, but that does not mean a "full redeploy from scratch" into the workspace every time. The .databricks directory is local state + cache; the real, durable state lives in the Databricks workspace (under the bundle's state_path).
What the .databricks directory actually is
On each databricks bundle deploy, the CLI creates a .databricks/ folder next to your databricks.yml that holds things like:
- Rendered bundle config (all variables, target overrides, includes resolved).
- Local representation of what was last deployed (resource IDs, mapping between logical names and workspace objects).
- Some caches for faster subsequent commands on the same machine.
That directory is only for the CLI process on that machine. It is not the authoritative "truth" of your deployment.
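As a rough sketch of the layout (exact contents vary by CLI version and target; the names below are illustrative, not exhaustive):

```
.databricks/
└── bundle/
    └── dev/                 # one subfolder per target
        ├── terraform/       # local working dir for the deploy engine
        ├── sync-snapshots/  # file-sync cache for this machine
        └── ...              # rendered config / metadata caches
```

Because it is just a per-machine cache, it is safe for this folder to disappear between CI runs; the next deploy simply rebuilds it.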
Where the real state lives
In your databricks.yml, under workspace, you can set a state_path (if you don't, it defaults to a location under workspace.root_path). For example:
workspace:
  root_path: /Shared/bundles/my_project
  state_path: /Shared/bundles/my_project/.state
Databricks stores the bundle's deployment state in the workspace under that path (resource IDs, checksums, etc. for jobs, pipelines, and other resources).
When you run databricks bundle deploy again (even from a fresh VM), the CLI:
- Reads your local bundle definition (databricks.yml plus included files).
- Reads the previous deployment state from the workspace state_path.
- Computes a diff and applies only what changed (creating, updating, or deleting resources incrementally).
So the incremental behavior depends on the workspace state, not on the GitHub runner's .databricks directory.
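To make that concrete, here is a minimal GitHub Actions sketch of a fresh-runner deploy. The workflow name, branch, secret names, and the prod target are assumptions for illustration; only databricks bundle deploy and the databricks/setup-cli action come from the real tooling:

```yaml
# Illustrative workflow: every run starts on a clean VM with no
# .databricks/ folder, yet the deploy is still incremental because
# the CLI reads prior state from the workspace state_path.
name: deploy-bundle
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - name: Deploy bundle (diffed against workspace state)
        run: databricks bundle deploy -t prod
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
```

Note there is no caching step for .databricks/ here; persisting it across runs is unnecessary because it is rebuilt from the workspace state on each deploy.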