Warehousing & Analytics
Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Community. Share insights, tips, and best practices for leveraging data for informed decision-making.

Recommended local development workflow for dashboard CI/CD with environment-specific catalog/schema?

playnicekids
New Contributor II

Hi all,

I’m trying to implement CI/CD for Databricks AI/BI dashboards using Databricks Asset Bundles (DABs), following guidance published by Databricks.

The documentation recommends exporting dashboards as .lvdash.json using databricks bundle generate, storing the .lvdash.json files in Git, deploying with databricks bundle deploy, and using variables to parameterize SQL warehouses or data sources across dev, UAT, and prod.

Our dashboard resource is roughly:

resources:
  dashboards:
    my_dashboard:
      display_name: "My Dashboard"
      file_path: ../dashboards/my_dashboard.lvdash.json
      warehouse_id: ${var.warehouse_id}
      dataset_catalog: ${var.dashboard_catalog}
      dataset_schema: ${var.dashboard_schema}
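For the ${var.…} references above to resolve, the bundle also needs matching top-level variable declarations; a minimal sketch (descriptions are illustrative):

```yaml
# Hypothetical variable declarations; names must match the
# ${var.…} references used in the dashboard resource.
variables:
  warehouse_id:
    description: SQL warehouse that backs the dashboard
  dashboard_catalog:
    description: Catalog containing the dashboard's datasets
  dashboard_schema:
    description: Schema containing the dashboard's datasets
```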

Each target then supplies different values:

targets:
  dev:
    variables:
      dashboard_catalog: dev_catalog
      dashboard_schema: curated

  prod:
    variables:
      dashboard_catalog: prod_catalog
      dashboard_schema: curated

The dashboard JSON contains metric view datasets. If the exported JSON contains fully-qualified asset names, for example:

"asset_name": "prod_catalog.curated.my_metric_view"

then the dashboard is not portable across environments without editing the JSON.

If we remove the catalog/schema from the JSON and use:

"asset_name": "my_metric_view"

then this works conceptually with the bundle-level dataset_catalog and dataset_schema during deployment, but local/workspace development becomes awkward. Opening or editing the dashboard JSON directly from the repo/workspace can fail with TABLE_OR_VIEW_NOT_FOUND, because the unqualified metric view cannot be resolved outside the deployed bundle context.

So my question is: what is the recommended development workflow for dashboard developers when using environment-specific catalogs and schemas?

Specifically:

Should .lvdash.json files contain unqualified dataset or metric view names and only be expected to work after bundle deploy?

Is direct dashboard editing from the repo/workspace expected to be unsupported or unreliable in this pattern?

Should teams maintain dev-qualified JSON in source and patch or generate environment-specific JSON during CI/CD, or is that considered an anti-pattern?

Is there currently any way to parameterize asset_name directly inside .lvdash.json, or are dataset_catalog and dataset_schema the intended mechanism?

In short, the CI/CD story is clear once the bundle is deployed, but the local iterative development story is less clear when the JSON cannot safely include a fixed catalog/schema.

2 REPLIES

KrisJohannesen
Contributor

This seems a bit counterintuitive, and I completely agree. As far as I am aware, there is no way around either committing to the DABs approach or not. For local development that means doing a DAB deploy every time you need to update your dashboard, which, I agree, removes UI-based editing.

Just for clarity for others who might read this, the two approaches look like this:

  1. The top one fits the DAB style, with the catalog and schema replaced per environment
  2. The bottom one works natively in the UI
  "datasets": [
    {
      "name": "8b55b470",
      "displayName": "budget",
      "asset_name": "budget",
      "catalog": "ai_bi_dev",
      "schema": "metric_view"
    },
    {
      "name": "f8f5ce98",
      "displayName": "budget",
      "asset_name": "ai_bi.metric_view.budget"
    }
  ],
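The first shape can be derived from the second mechanically. A minimal Python sketch, assuming the field names in the snippet above (this is a hypothetical helper, not part of any Databricks tooling):

```python
def split_asset_name(dataset: dict) -> dict:
    """Turn a fully-qualified dataset entry (the UI-native shape) into
    the DAB-friendly shape: an unqualified asset_name plus explicit
    catalog/schema keys. Entries whose asset_name is not three-part
    qualified are returned unchanged."""
    parts = dataset.get("asset_name", "").split(".")
    if len(parts) != 3:
        return dataset
    catalog, schema, name = parts
    return {**dataset, "asset_name": name, "catalog": catalog, "schema": schema}
```

For example, the second entry above would come back with asset_name "budget", catalog "ai_bi", and schema "metric_view".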
 
Hoping someone from Databricks can clarify whether this is a bug or expected behavior. Do note that the parameters for default catalog and schema are quite new, so I imagine this will probably be addressed in a future update.

stbjelcevic
Databricks Employee

hi @playnicekids ,

You've hit a known dev-UX gap. dataset_catalog and dataset_schema on the dashboard resource are the intended parameterization mechanism, but they only resolve at bundle deploy time, which is why workspace editing of an unqualified JSON fails with TABLE_OR_VIEW_NOT_FOUND.

Quick answers to your four questions:

  1. Yes. The deploy-target .lvdash.json should hold unqualified asset_name values and is only expected to resolve after bundle deploy.
  2. Effectively yes. Direct workspace editing of the unqualified file isn't reliable. Treat it as generated output and mark it "do not hand-edit."
  3. Not an anti-pattern. Maintaining a dev-qualified JSON in source and transforming it during CI is the recommended workflow today.
  4. No. There is currently no way to parameterize asset_name inside the JSON body; dataset_catalog / dataset_schema at the bundle resource level are the only knobs.

Two patterns that work in practice:

  • Two-file: keep my_dashboard.dev.lvdash.json (dev-qualified, what developers iterate on in the workspace) and generate my_dashboard.lvdash.json (unqualified, deploy-target) in CI before bundle deploy. Simple, clear ownership.
  • Programmatic: load a template JSON, inject catalog/schema per environment, and publish via the Lakeview SDK (link) (lakeview.create / lakeview.publish). The bundle just orchestrates a notebook. More control, less reliance on file-naming convention.
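For the two-file pattern, the CI transform can be as small as stripping the dev qualification from every dataset. A sketch under the assumption that datasets carry a top-level asset_name field as in the export above (function name and file paths are illustrative):

```python
import json

def to_deploy_json(dev_dash: dict, dev_catalog: str, dev_schema: str) -> dict:
    """Return a copy of a dev-qualified dashboard dict with the
    '<dev_catalog>.<dev_schema>.' prefix stripped from each dataset's
    asset_name, so bundle-level dataset_catalog / dataset_schema can
    re-qualify the names at deploy time. The input dict is untouched."""
    prefix = f"{dev_catalog}.{dev_schema}."
    out = json.loads(json.dumps(dev_dash))  # deep copy via round-trip
    for ds in out.get("datasets", []):
        name = ds.get("asset_name", "")
        if name.startswith(prefix):
            ds["asset_name"] = name[len(prefix):]
    return out

if __name__ == "__main__":
    # Illustrative CI step: dev-qualified file in, deploy-target file out.
    with open("dashboards/my_dashboard.dev.lvdash.json") as f:
        dev = json.load(f)
    deploy = to_deploy_json(dev, "dev_catalog", "curated")
    with open("dashboards/my_dashboard.lvdash.json", "w") as out_f:
        json.dump(deploy, out_f, indent=2)
```

Running this before bundle deploy keeps the committed dev file editable in the workspace while the generated deploy-target file stays unqualified.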

If you want pure-bundle with no transform step, you can run a single unqualified file + bundle vars, but you give up workspace previewing. Every iteration goes through bundle deploy -t dev.

Until workspace-native editing respects bundle context, the transform-in-CI pattern is the practical answer.