Notebooks
These are the easiest assets to back up.
You can export them individually or in bulk as a .dbc archive, source files (.py, .scala, .sql, .r), HTML, or IPython (.ipynb) notebooks.
To download in bulk:
Navigate to your Workspace folder in Databricks.
Right-click → Export → choose format.
Store the export in your shared drive or Git repository (a scripted alternative is sketched below).
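For the scripted route, the Workspace export endpoint can pull a whole folder as one archive. A minimal sketch, assuming a personal access token in environment variables and a hypothetical folder path:

```python
# Minimal sketch: bulk-export a workspace folder as a .dbc archive via the
# Workspace API. DATABRICKS_HOST / DATABRICKS_TOKEN and the folder path below
# are assumptions for illustration.
import base64
import os

import requests

HOST = os.environ["DATABRICKS_HOST"]      # e.g. https://adb-1234567890123456.7.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]
FOLDER = "/Users/someone@example.com"     # hypothetical workspace folder

resp = requests.get(
    f"{HOST}/api/2.0/workspace/export",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"path": FOLDER, "format": "DBC"},   # SOURCE, HTML, or JUPYTER also work
)
resp.raise_for_status()

# The endpoint returns the archive as base64-encoded content.
with open("workspace_backup.dbc", "wb") as f:
    f.write(base64.b64decode(resp.json()["content"]))
```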
Queries and Dashboards (Databricks SQL Editor / SQL Warehouses)
These are not included in .dbc or workspace ZIP exports.
They are stored separately in Databricks SQL.
You must export them manually or via the SQL REST API:
Open each Query → Copy SQL → Save as .sql file.
For Dashboards, also capture screenshots and export the definitions using /api/2.0/preview/sql/dashboards (see the sketch after this list).
Store both in a structured folder (e.g., /sql_exports/queries and /sql_exports/dashboards).
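Here is a minimal sketch of that export using the legacy (preview) SQL REST endpoints. It writes into the folder layout above, assumes DATABRICKS_HOST / DATABRICKS_TOKEN environment variables, and only fetches the first page of results for brevity:

```python
# Minimal sketch: save query SQL and dashboard definitions from the legacy
# (preview) SQL API into /sql_exports. Add paging for large workspaces.
import json
import os
import pathlib

import requests

HOST = os.environ["DATABRICKS_HOST"]
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

queries_dir = pathlib.Path("sql_exports/queries")
dashboards_dir = pathlib.Path("sql_exports/dashboards")
queries_dir.mkdir(parents=True, exist_ok=True)
dashboards_dir.mkdir(parents=True, exist_ok=True)

# Saved queries: each result carries its SQL text in the "query" field.
resp = requests.get(f"{HOST}/api/2.0/preview/sql/queries",
                    headers=HEADERS, params={"page_size": 100})
resp.raise_for_status()
for q in resp.json().get("results", []):
    (queries_dir / f"{q['id']}.sql").write_text(q.get("query", ""))

# Dashboards: list them, then fetch each one to capture its full definition.
resp = requests.get(f"{HOST}/api/2.0/preview/sql/dashboards",
                    headers=HEADERS, params={"page_size": 100})
resp.raise_for_status()
for d in resp.json().get("results", []):
    detail = requests.get(f"{HOST}/api/2.0/preview/sql/dashboards/{d['id']}",
                          headers=HEADERS)
    detail.raise_for_status()
    (dashboards_dir / f"{d['id']}.json").write_text(json.dumps(detail.json(), indent=2))
```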
I am assuming all dashboards and queries were created manually in the workspace UI and not deployed via Databricks Asset Bundles (DAB) or automation.
That means they exist only within the workspace metadata; if they are not backed up before decommissioning, they cannot be recovered later.
Recommendation for the New Environment
Once the new workspace is provisioned:
Adopt Databricks Asset Bundles (DAB)
Use DAB to automate deployments of notebooks, jobs, dashboards, and SQL assets from code.
It provides versioning, repeatable deployments, and environment isolation (Dev → Test → Prod).
All configuration lives as code in YAML — meaning if you lose a workspace, you can rehydrate it in minutes.
Reference: Databricks Asset Bundles Documentation
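Once a bundle is defined, redeployment is a couple of CLI calls. A minimal sketch, assuming the newer Databricks CLI is installed and authenticated, that a databricks.yml lives in the working directory, and that "dev" is one of your bundle targets:

```python
# Minimal sketch: rehydrate a workspace from a bundle definition by shelling
# out to the Databricks CLI. The "dev" target name and the -t flag usage are
# assumptions; check `databricks bundle --help` for your CLI version.
import subprocess

def redeploy(target: str = "dev") -> None:
    # Validate databricks.yml, then deploy the notebooks, jobs, and other
    # resources it declares into the chosen target workspace.
    subprocess.run(["databricks", "bundle", "validate", "-t", target], check=True)
    subprocess.run(["databricks", "bundle", "deploy", "-t", target], check=True)

if __name__ == "__main__":
    redeploy()
```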
Move Code to GitHub (or Azure DevOps)
Every notebook, pipeline, or SQL query should live in source control.
Avoid storing logic exclusively inside the Databricks workspace.
Use Git integration to sync notebooks automatically; this ensures version tracking, peer review, and rollback capability.
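If you want that sync to run on a schedule or as part of CI, the Repos API can pull a branch into a workspace Git folder. A minimal sketch, where the repo ID and branch name are placeholders:

```python
# Minimal sketch: pull the latest commit of a branch into a Databricks Git
# folder (Repo) via the Repos API. REPO_ID and BRANCH are placeholders;
# DATABRICKS_HOST / DATABRICKS_TOKEN are assumed to be set.
import os

import requests

HOST = os.environ["DATABRICKS_HOST"]
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

REPO_ID = "1234567890"   # hypothetical: list repos with GET /api/2.0/repos to find yours
BRANCH = "main"

resp = requests.patch(f"{HOST}/api/2.0/repos/{REPO_ID}",
                      headers=HEADERS, json={"branch": BRANCH})
resp.raise_for_status()
print(resp.json())   # updated repo object, including the head commit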
Avoid Manual Notebooks in Production
Use notebooks only for experimentation or prototyping.
For production, all code should be deployed as jobs or workflows, pipelines, or packaged libraries (wheels/JARs) delivered through Databricks Asset Bundles or CI/CD (a minimal Jobs API sketch follows).
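As one illustration, a job can be defined directly against version-controlled code instead of a manually run notebook. A minimal sketch using the Jobs API, where the job name, repo URL, notebook path, and cluster spec are all placeholders:

```python
# Minimal sketch: create a production job that runs a notebook straight from
# Git via the Jobs API. All names, the repo URL, and the cluster settings are
# placeholders; DATABRICKS_HOST / DATABRICKS_TOKEN are assumed to be set.
import os

import requests

HOST = os.environ["DATABRICKS_HOST"]
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

job_spec = {
    "name": "nightly-etl",                                    # hypothetical
    "git_source": {
        "git_url": "https://github.com/your-org/your-repo",   # hypothetical
        "git_provider": "gitHub",
        "git_branch": "main",
    },
    "tasks": [{
        "task_key": "run_etl",
        "notebook_task": {"notebook_path": "etl/main"},        # path inside the repo
        "new_cluster": {
            "spark_version": "15.4.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",                  # Azure example node type
            "num_workers": 2,
        },
    }],
}

resp = requests.post(f"{HOST}/api/2.1/jobs/create", headers=HEADERS, json=job_spec)
resp.raise_for_status()
print(resp.json())   # returns the new job_id
```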
This reduces drift, supports automated testing, and makes your environment reproducible. Hope this helps!