03-06-2026 02:32 AM
I’m currently working with two workspaces – one for DEV and one for PROD.
I’m trying to understand how I can keep the Genie/Dashboard functionalities in sync (mirrored) between these two environments. What is the best way to organize this workflow?
Ideally, I’d like to develop and test new functionalities in the DEV workspace and then deploy them to PROD once they’re approved. Will an asset bundle also include and migrate the Genie/Dashboard structure from one workspace to another?
I noticed that from the Dashboard section you can export the underlying code as a JSON file, but I’m not sure how this fits into a proper mirroring strategy between DEV and PROD. Our goal is for business users to have access in PROD only to approved dashboards, while we continue working in DEV on optimizations, changes, and updates.
How would you recommend structuring this workflow in Databricks to manage Genie/Dashboard development in DEV and controlled promotion to PROD?
03-06-2026 04:57 PM
Hi @Stanciu_Cristi ,
Great question - this is a very common pattern (DEV to PROD promotion) and Databricks has solid support for dashboards, with Genie space support still catching up. Let me break this down comprehensively.
PART 1: DASHBOARDS - DATABRICKS ASSET BUNDLES (FULLY SUPPORTED)
Yes, Databricks Asset Bundles (DABs) fully support AI/BI dashboards as a managed resource. This is the recommended approach for your DEV-to-PROD workflow.
Here is the end-to-end workflow:
Step 1 - Export your existing dashboard into a bundle definition:
databricks bundle generate dashboard \
  --existing-id <dashboard-id> \
  --bind
This creates two things:
- A YAML resource definition (e.g., my_dashboard.dashboard.yml)
- The serialized dashboard file (e.g., my_dashboard.lvdash.json)
The --bind flag links the generated config to the existing dashboard so you do not create a duplicate on deploy.
Step 2 - Configure your bundle with targets for DEV and PROD:
bundle:
  name: my-dashboard-bundle

variables:
  warehouse_id:
    description: "SQL Warehouse ID"
  catalog:
    description: "Target catalog"
  schema:
    description: "Target schema"

resources:
  dashboards:
    my_dashboard:
      display_name: "Sales Dashboard"
      file_path: src/my_dashboard.lvdash.json
      warehouse_id: ${var.warehouse_id}
      dataset_catalog: ${var.catalog}
      dataset_schema: ${var.schema}
      embed_credentials: true

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://dev-workspace.cloud.databricks.com
    variables:
      warehouse_id: "abc123_dev_warehouse"
      catalog: "dev_catalog"
      schema: "dev_schema"
  prod:
    mode: production
    workspace:
      host: https://prod-workspace.cloud.databricks.com
    variables:
      warehouse_id: "xyz789_prod_warehouse"
      catalog: "prod_catalog"
      schema: "prod_schema"
    permissions:
      - user_name: prod-admins@company.com
        level: CAN_MANAGE
Key properties to know about:
- warehouse_id: overrides the SQL warehouse per environment
- dataset_catalog / dataset_schema: overrides the default catalog and schema used by all dataset queries in the dashboard, so the same .lvdash.json works across environments without modifying SQL
- embed_credentials: when true, all viewers run queries using the deployer's credentials (useful for PROD so business users do not need direct table access)
- permissions: control who can view/manage the dashboard
Step 3 - Deploy:
databricks bundle deploy --target dev # deploy to DEV
databricks bundle deploy --target prod # deploy to PROD
Step 4 - Keep in sync with --watch:
If someone edits the dashboard in the DEV workspace UI, you can pull those changes back into your bundle:
databricks bundle generate dashboard --resource my_dashboard --watch
This continuously polls for changes and updates your local .lvdash.json file, which you can then commit to Git and deploy to PROD.
Documentation references:
- Bundle resources: https://docs.databricks.com/aws/en/dev-tools/bundles/resources
- Bundle examples with dashboard: https://docs.databricks.com/aws/en/dev-tools/bundles/examples
- CI/CD best practices: https://docs.databricks.com/aws/en/dev-tools/ci-cd/best-practices
- Git support for dashboards: https://docs.databricks.com/aws/en/dashboards/automate/git-support
- Bundle CLI commands (generate): https://docs.databricks.com/aws/en/dev-tools/cli/bundle-commands
PART 2: THE JSON EXPORT AND HOW IT FITS IN
The JSON export you see in the Dashboard UI (the .lvdash.json file) IS the same serialized format that DABs use. So the two approaches work together:
- UI Export: Dashboard menu -> Export -> downloads a .lvdash.json file
- DABs: "bundle generate dashboard" creates the same .lvdash.json programmatically
- UI Import: You can also import a .lvdash.json via the Dashboard menu -> Replace dashboard
For your workflow, I recommend using "bundle generate" rather than manual UI export because it also creates the YAML configuration and can be automated in CI/CD.
One gotcha: the .lvdash.json file contains hardcoded catalog/schema references in the SQL queries. The dataset_catalog and dataset_schema properties in the bundle YAML override the DEFAULT catalog/schema for the dashboard's datasets, but if your SQL queries use fully qualified names like "dev_catalog.dev_schema.my_table", those will NOT be overridden. Best practice is to use unqualified table names in your dashboard queries and let dataset_catalog/dataset_schema handle the environment routing.
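If you already have dashboards whose queries hardcode the DEV catalog and schema, a small preprocessing script can strip that prefix from the queryLines in the .lvdash.json before you commit it. This is a rough sketch, not an official tool: it assumes your datasets store SQL in a queryLines field (as current .lvdash.json files do) and that the DEV prefix is a literal string that is safe to remove:

```python
import json
import re

def strip_qualifier(lvdash_text: str, catalog: str, schema: str) -> str:
    """Remove a hardcoded `catalog.schema.` prefix from every dataset query
    in a serialized dashboard, so dataset_catalog/dataset_schema in the
    bundle config can route the queries per environment."""
    doc = json.loads(lvdash_text)
    prefix = re.compile(re.escape(f"{catalog}.{schema}."), re.IGNORECASE)
    for dataset in doc.get("datasets", []):
        dataset["queryLines"] = [
            prefix.sub("", line) for line in dataset.get("queryLines", [])
        ]
    return json.dumps(doc, indent=2)

# Example: a dashboard with one dataset that hardcodes the DEV catalog/schema
raw = json.dumps({
    "datasets": [
        {"name": "sales", "queryLines": ["SELECT * FROM dev_catalog.dev_schema.orders"]}
    ]
})
cleaned = json.loads(strip_qualifier(raw, "dev_catalog", "dev_schema"))
print(cleaned["datasets"][0]["queryLines"][0])  # SELECT * FROM orders
```

Review the diff before committing, since a plain prefix match can also hit string literals that happen to contain the same text.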
Another gotcha: DABs syncs all files in the bundle directory to the workspace. If you have multiple .lvdash.json variants (e.g., one per environment), all of them get uploaded and may create extra dashboards. Keep only ONE .lvdash.json file and use variables for environment differences.
PART 3: GENIE SPACES - NOT YET IN DABS (USE THE REST API)
As of today (March 2026), Genie spaces are NOT a supported resource type in Databricks Asset Bundles. There is an open GitHub issue requesting this:
https://github.com/databricks/cli/issues/3008
There is also a community-contributed PR in progress:
https://github.com/databricks/cli/pull/4191
However, there IS a workaround using the Genie REST API, which is now in Beta/Public Preview. Here is the approach:
Step 1 - Export the Genie space configuration from DEV:
Use the Get Space API to retrieve the serialized space configuration:
GET /api/2.0/genie/spaces/<space_id>
API reference: https://docs.databricks.com/api/workspace/genie/getspace
The response includes the full space configuration: tables, instructions, sample queries, joins, filters, etc. Everything except existing conversation threads.
Step 2 - Create the space in PROD:
Use the Create Space API to create it in the target workspace:
POST /api/2.0/genie/spaces
API reference: https://docs.databricks.com/api/workspace/genie/createspace
Important notes:
- Space title and description are NOT part of the serialized configuration; you set them during creation
- If your catalog/schema names differ between DEV and PROD, you need to adjust the table references in the serialized config before creating
- For updates to an existing PROD space, use the Update Space API
Step 3 - Automate in CI/CD:
You can wrap this in a Python script or use the Databricks SDK. A simple pattern:
from databricks.sdk import WorkspaceClient

dev_client = WorkspaceClient(host="https://dev-workspace...", token="...")
prod_client = WorkspaceClient(host="https://prod-workspace...", token="...")

# Export from DEV
space = dev_client.genie.get_space(space_id="<dev_space_id>")

# Create in PROD (adjust catalog/schema if needed)
prod_client.genie.create_space(
    title="My Genie Space",
    description="Production genie space",
    # pass serialized config from dev space
)
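Since the serialized space references tables by name, the catalog/schema adjustment from step 2 can be done as a text substitution on the exported configuration before calling create_space. The helper below is a sketch under two assumptions: the exported config is available as a JSON string, and DEV names appear as literal catalog.schema prefixes (field names in the real Beta API response may differ):

```python
def remap_environment(serialized: str, mapping: dict[str, str]) -> str:
    """Rewrite DEV catalog.schema prefixes to their PROD equivalents in a
    serialized Genie space config. Plain text substitution: inspect the
    result before creating the PROD space."""
    for dev_name, prod_name in mapping.items():
        serialized = serialized.replace(dev_name, prod_name)
    return serialized

# Hypothetical exported config; the real serialized format may differ.
dev_config = '{"tables": ["dev_catalog.dev_schema.orders", "dev_catalog.dev_schema.customers"]}'
prod_config = remap_environment(
    dev_config, {"dev_catalog.dev_schema": "prod_catalog.prod_schema"}
)
print(prod_config)
```

The remapped string is what you would then hand to the Create/Update Space call for the PROD workspace.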
There is also a community tool called "SpaceOps" that wraps this into a CLI for CI/CD:
https://github.com/charotAmine/databricks-spaceops
And a reusable Genie import/export component from Databricks field engineering:
https://github.com/databricks-field-eng/reusable-ip-ai/tree/main/components/genie/genie_import_expor...
PART 4: RECOMMENDED OVERALL WORKFLOW
Here is how I would structure your DEV-to-PROD workflow:
1. Git Repository: Store your bundle configuration (databricks.yml), dashboard files (.lvdash.json), and Genie space export scripts in a single Git repo.
2. DEV Workflow:
- Develop dashboards in the DEV workspace UI
- Use "databricks bundle generate dashboard --watch" to sync changes back to your local repo
- For Genie spaces, develop in DEV workspace UI, then export via API/SDK
- Commit everything to Git
3. Approval Process:
- Use Git pull requests for review/approval before merging to main
- This gives you an audit trail of what changed and who approved it
4. PROD Deployment (via CI/CD - e.g., GitHub Actions):
- On merge to main, run:
databricks bundle deploy --target prod
- For Genie spaces, run your API-based deployment script
- Optionally add a manual approval gate in your CI/CD pipeline
5. Business User Access:
- In PROD, set embed_credentials: true on dashboards so viewers use the deployer's credentials
- Set appropriate permissions so business users can view but not edit
- For Genie spaces, configure permissions via the API or UI in PROD
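For the view-only access in PROD, a permissions block in the prod target is one way to express it declaratively. A sketch (the group names are placeholders; check the levels your workspace exposes for dashboards):

```yaml
targets:
  prod:
    permissions:
      - group_name: business-users
        level: CAN_VIEW
      - group_name: dashboard-developers
        level: CAN_MANAGE
```

Combined with embed_credentials: true, business users can open the published dashboard without needing grants on the underlying tables.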
Example GitHub Actions workflow:
name: Deploy to PROD

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - run: databricks bundle deploy --target prod
        env:
          DATABRICKS_HOST: ${{ secrets.PROD_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.PROD_TOKEN }}
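For the manual approval gate, GitHub's environment protection rules fit naturally: attach the deploy job to an environment that has required reviewers configured, and the run pauses until someone approves. A sketch (the environment name is your choice):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    # If the "production" environment has required reviewers configured
    # in the repo settings, the job waits here for approval.
    environment: production
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - run: databricks bundle deploy --target prod
```

This keeps the approval step in GitHub alongside the pull request history, rather than in a separate tool.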
PART 5: KEY GOTCHAS AND LIMITATIONS
1. Warehouse IDs differ between workspaces: Always parameterize warehouse_id using bundle variables. Never hardcode them.
2. Catalog/schema names: Use dataset_catalog and dataset_schema in the bundle config. Avoid fully-qualified table names in dashboard SQL when possible.
3. Dashboard file sync: DABs uploads ALL files in the bundle directory. Keep your directory structure clean with only one .lvdash.json per dashboard.
4. Dev mode naming: In development mode, DABs prepends "[dev username]" to resource names. This helps avoid conflicts but means the dashboard has a different display name in DEV vs PROD.
5. Genie spaces are manual for now: Until native DABs support ships, you need a separate script or tool for Genie space promotion. The Genie Management APIs are in Beta, so there may be breaking changes.
6. Git folder limit: If using Git folders for dashboards, there is a limit of 100 dashboards per Git folder.
7. Publishing: Deploying a dashboard with DABs automatically publishes it. You do not need a separate publish step.
8. Cross-workspace deployment from UI: The "rocket button" deployment in the workspace UI does NOT support deploying to a different workspace. You must use the CLI for cross-workspace deployment.
I hope this helps you set up a solid workflow. The dashboard side with DABs is mature and production-ready. The Genie side requires the API-based workaround for now, but native DABs support is actively being worked on.
Best regards
a month ago
@SteveOstrowski do you know whether setting dataset_catalog and dataset_schema in the dashboard DAB resource is supported for metric views, or if not already when we could expect to see this feature?
I'm asking because we are able to make it work for queries using the queryLines property; however, when we use the asset_name property it stops working and requires a fully qualified name.
As soon as we change from a fully qualified name "catalog.schema.table_name" to just "table_name", we receive the following exception in the Databricks dashboard:
[TABLE_OR_VIEW_NOT_FOUND] The table or view ``.``.`` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog. To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS. SQLSTATE: 42P01; line 1 pos 2198
a month ago
Hi ATN,
Good question. Based on what I know, dataset_catalog and dataset_schema in the dashboard DAB resource currently apply to datasets that use SQL queries (i.e., the queryLines property), but they do not resolve for datasets that reference metric views via the asset_name property. That lines up with the behavior you are seeing: it works with queryLines but fails with asset_name and an unqualified name.
Metric views in general require fully qualified three-part names (catalog.schema.metric_view_name). The dataset_catalog/dataset_schema substitution does not currently get applied when the dashboard resolves an asset_name reference, which is why you get the TABLE_OR_VIEW_NOT_FOUND error with the empty backticks (``.``.``): the catalog/schema values are not being injected at all.
For now, the workaround is to use the fully qualified name in the asset_name property (e.g., catalog.schema.table_name). If you need environment-specific routing, you could use bundle variable substitution to parameterize the catalog/schema portions of the fully qualified name. I do not have a confirmed timeline for when dataset_catalog/dataset_schema will be extended to cover asset_name references, but it is a known gap.
a month ago
Thanks Steve for the quick reply and the confirmation. I would have liked the lack of support for metric views to be better documented 🙂