<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Genie / Dashboard Workflow in Warehousing &amp; Analytics</title>
    <link>https://community.databricks.com/t5/warehousing-analytics/genie-dashboard-workflow/m-p/149980#M2519</link>
    <description>&lt;P&gt;I’m currently working with two workspaces – one for DEV and one for PROD.&lt;BR /&gt;I’m trying to understand how I can keep the Genie/Dashboard functionalities in sync (mirrored) between these two environments. What is the best way to organize this workflow?&lt;/P&gt;&lt;P&gt;Ideally, I’d like to develop and test new functionalities in the DEV workspace and then deploy them to PROD once they’re approved. Will an asset bundle also include and migrate the Genie/Dashboard structure from one workspace to another?&lt;/P&gt;&lt;P&gt;I noticed that from the Dashboard section you can export the underlying code as a JSON file, but I’m not sure how this fits into a proper mirroring strategy between DEV and PROD. Our goal is for business users to have access in PROD only to approved dashboards, while we continue working in DEV on optimizations, changes, and updates.&lt;/P&gt;&lt;P&gt;How would you recommend structuring this workflow in Databricks to manage Genie/Dashboard development in DEV and controlled promotion to PROD?&lt;/P&gt;</description>
    <pubDate>Fri, 06 Mar 2026 10:32:55 GMT</pubDate>
    <dc:creator>Stanciu_Cristi</dc:creator>
    <dc:date>2026-03-06T10:32:55Z</dc:date>
    <item>
      <title>Genie / Dashboard Workflow</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/genie-dashboard-workflow/m-p/149980#M2519</link>
      <description>&lt;P&gt;I’m currently working with two workspaces – one for DEV and one for PROD.&lt;BR /&gt;I’m trying to understand how I can keep the Genie/Dashboard functionalities in sync (mirrored) between these two environments. What is the best way to organize this workflow?&lt;/P&gt;&lt;P&gt;Ideally, I’d like to develop and test new functionalities in the DEV workspace and then deploy them to PROD once they’re approved. Will an asset bundle also include and migrate the Genie/Dashboard structure from one workspace to another?&lt;/P&gt;&lt;P&gt;I noticed that from the Dashboard section you can export the underlying code as a JSON file, but I’m not sure how this fits into a proper mirroring strategy between DEV and PROD. Our goal is for business users to have access in PROD only to approved dashboards, while we continue working in DEV on optimizations, changes, and updates.&lt;/P&gt;&lt;P&gt;How would you recommend structuring this workflow in Databricks to manage Genie/Dashboard development in DEV and controlled promotion to PROD?&lt;/P&gt;</description>
      <pubDate>Fri, 06 Mar 2026 10:32:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/genie-dashboard-workflow/m-p/149980#M2519</guid>
      <dc:creator>Stanciu_Cristi</dc:creator>
      <dc:date>2026-03-06T10:32:55Z</dc:date>
    </item>
    <item>
      <title>Re: Genie / Dashboard Workflow</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/genie-dashboard-workflow/m-p/150048#M2520</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/207545"&gt;@Stanciu_Cristi&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;Great question - this is a very common pattern (DEV to PROD promotion) and Databricks has solid support for dashboards, with Genie space support still catching up. Let me break this down comprehensively.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;PART 1: DASHBOARDS - DATABRICKS ASSET BUNDLES (FULLY SUPPORTED)&lt;/P&gt;
&lt;P&gt;Yes, Databricks Asset Bundles (DABs) fully support AI/BI dashboards as a managed resource. This is the recommended approach for your DEV-to-PROD workflow.&lt;/P&gt;
&lt;P&gt;Here is the end-to-end workflow:&lt;/P&gt;
&lt;P&gt;Step 1 - Export your existing dashboard into a bundle definition:&lt;/P&gt;
&lt;PRE&gt;databricks bundle generate dashboard \
  --existing-id &amp;lt;dashboard-id&amp;gt; \
  --bind&lt;/PRE&gt;
&lt;P&gt;This creates two things:&lt;BR /&gt;- A YAML resource definition (e.g., my_dashboard.dashboard.yml)&lt;BR /&gt;- The serialized dashboard file (e.g., my_dashboard.lvdash.json)&lt;/P&gt;
&lt;P&gt;The --bind flag links the generated config to the existing dashboard so you do not create a duplicate on deploy.&lt;/P&gt;
&lt;P&gt;Step 2 - Configure your bundle with targets for DEV and PROD:&lt;/P&gt;
&lt;PRE&gt;bundle:
  name: my-dashboard-bundle

variables:
  warehouse_id:
    description: "SQL Warehouse ID"
  catalog:
    description: "Target catalog"
  schema:
    description: "Target schema"

resources:
  dashboards:
    my_dashboard:
      display_name: "Sales Dashboard"
      file_path: src/my_dashboard.lvdash.json
      warehouse_id: ${var.warehouse_id}
      dataset_catalog: ${var.catalog}
      dataset_schema: ${var.schema}
      embed_credentials: true

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://dev-workspace.cloud.databricks.com
    variables:
      warehouse_id: "abc123_dev_warehouse"
      catalog: "dev_catalog"
      schema: "dev_schema"

  prod:
    mode: production
    workspace:
      host: https://prod-workspace.cloud.databricks.com
    variables:
      warehouse_id: "xyz789_prod_warehouse"
      catalog: "prod_catalog"
      schema: "prod_schema"
    permissions:
      - user_name: prod-admins@company.com
        level: CAN_MANAGE&lt;/PRE&gt;
&lt;P&gt;Key properties to know about:&lt;BR /&gt;- warehouse_id: overrides the SQL warehouse per environment&lt;BR /&gt;- dataset_catalog / dataset_schema: overrides the default catalog and schema used by all dataset queries in the dashboard, so the same .lvdash.json works across environments without modifying SQL&lt;BR /&gt;- embed_credentials: when true, all viewers run queries using the deployer's credentials (useful for PROD so business users do not need direct table access)&lt;BR /&gt;- permissions: control who can view/manage the dashboard&lt;/P&gt;
&lt;P&gt;Step 3 - Deploy:&lt;/P&gt;
&lt;PRE&gt;databricks bundle deploy --target dev    # deploy to DEV
databricks bundle deploy --target prod   # deploy to PROD&lt;/PRE&gt;
&lt;P&gt;Step 4 - Keep in sync with --watch:&lt;/P&gt;
&lt;P&gt;If someone edits the dashboard in the DEV workspace UI, you can pull those changes back into your bundle:&lt;/P&gt;
&lt;PRE&gt;databricks bundle generate dashboard --resource my_dashboard --watch&lt;/PRE&gt;
&lt;P&gt;This continuously polls for changes and updates your local .lvdash.json file, which you can then commit to Git and deploy to PROD.&lt;/P&gt;
&lt;P&gt;Documentation references:&lt;BR /&gt;- Bundle resources: &lt;A href="https://docs.databricks.com/aws/en/dev-tools/bundles/resources" target="_blank"&gt;https://docs.databricks.com/aws/en/dev-tools/bundles/resources&lt;/A&gt;&lt;BR /&gt;- Bundle examples with dashboard: &lt;A href="https://docs.databricks.com/aws/en/dev-tools/bundles/examples" target="_blank"&gt;https://docs.databricks.com/aws/en/dev-tools/bundles/examples&lt;/A&gt;&lt;BR /&gt;- CI/CD best practices: &lt;A href="https://docs.databricks.com/aws/en/dev-tools/ci-cd/best-practices" target="_blank"&gt;https://docs.databricks.com/aws/en/dev-tools/ci-cd/best-practices&lt;/A&gt;&lt;BR /&gt;- Git support for dashboards: &lt;A href="https://docs.databricks.com/aws/en/dashboards/automate/git-support" target="_blank"&gt;https://docs.databricks.com/aws/en/dashboards/automate/git-support&lt;/A&gt;&lt;BR /&gt;- Bundle CLI commands (generate): &lt;A href="https://docs.databricks.com/aws/en/dev-tools/cli/bundle-commands" target="_blank"&gt;https://docs.databricks.com/aws/en/dev-tools/cli/bundle-commands&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;PART 2: THE JSON EXPORT AND HOW IT FITS IN&lt;/P&gt;
&lt;P&gt;The JSON export you see in the Dashboard UI (the .lvdash.json file) IS the same serialized format that DABs use. So the two approaches work together:&lt;/P&gt;
&lt;P&gt;- UI Export: Dashboard menu -&amp;gt; Export -&amp;gt; downloads a .lvdash.json file&lt;BR /&gt;- DABs: "bundle generate dashboard" creates the same .lvdash.json programmatically&lt;BR /&gt;- UI Import: You can also import a .lvdash.json via the Dashboard menu -&amp;gt; Replace dashboard&lt;/P&gt;
&lt;P&gt;For your workflow, I recommend using "bundle generate" rather than manual UI export because it also creates the YAML configuration and can be automated in CI/CD.&lt;/P&gt;
&lt;P&gt;One gotcha: the .lvdash.json file contains hardcoded catalog/schema references in the SQL queries. The dataset_catalog and dataset_schema properties in the bundle YAML override the DEFAULT catalog/schema for the dashboard's datasets, but if your SQL queries use fully qualified names like "dev_catalog.dev_schema.my_table", those will NOT be overridden. Best practice is to use unqualified table names in your dashboard queries and let dataset_catalog/dataset_schema handle the environment routing.&lt;/P&gt;
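&lt;P&gt;To make that gotcha concrete, here is a rough Python sketch (my own illustration, not an official tool; the &lt;CODE&gt;datasets&lt;/CODE&gt;/&lt;CODE&gt;queryLines&lt;/CODE&gt; layout and the regex are assumptions about the .lvdash.json export) that scans a serialized dashboard for hard-coded three-part names so they can be flagged in review before promotion:&lt;/P&gt;

```python
import json
import re

# Matches three-part names like dev_catalog.dev_schema.my_table
QUALIFIED_NAME = re.compile(r"\b[A-Za-z_]\w*\.[A-Za-z_]\w*\.[A-Za-z_]\w*\b")

def find_qualified_names(lvdash_json):
    """Return fully qualified table names found in a serialized
    dashboard's dataset queries."""
    dashboard = json.loads(lvdash_json)
    hits = []
    for dataset in dashboard.get("datasets", []):
        query = "\n".join(dataset.get("queryLines", []))
        hits.extend(QUALIFIED_NAME.findall(query))
    return hits

# Hypothetical minimal payload for demonstration
payload = json.dumps({
    "datasets": [
        {"queryLines": ["SELECT * FROM dev_catalog.dev_schema.sales"]},
        {"queryLines": ["SELECT * FROM orders"]},
    ]
})
print(find_qualified_names(payload))  # → ['dev_catalog.dev_schema.sales']
```

&lt;P&gt;Anything this flags is a query that &lt;CODE&gt;dataset_catalog&lt;/CODE&gt;/&lt;CODE&gt;dataset_schema&lt;/CODE&gt; will not override.&lt;/P&gt;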
&lt;P&gt;Another gotcha: DABs syncs all files in the bundle directory to the workspace. If you have multiple .lvdash.json variants (e.g., one per environment), all of them get uploaded and may create extra dashboards. Keep only ONE .lvdash.json file and use variables for environment differences.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;PART 3: GENIE SPACES - NOT YET IN DABS (USE THE REST API)&lt;/P&gt;
&lt;P&gt;As of today (March 2026), Genie spaces are NOT a supported resource type in Databricks Asset Bundles. There is an open GitHub issue requesting this:&lt;BR /&gt;&lt;A href="https://github.com/databricks/cli/issues/3008" target="_blank"&gt;https://github.com/databricks/cli/issues/3008&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;There is also a community-contributed PR in progress:&lt;BR /&gt;&lt;A href="https://github.com/databricks/cli/pull/4191" target="_blank"&gt;https://github.com/databricks/cli/pull/4191&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;However, there IS a workaround using the Genie REST API, which is now in Beta/Public Preview. Here is the approach:&lt;/P&gt;
&lt;P&gt;Step 1 - Export the Genie space configuration from DEV:&lt;/P&gt;
&lt;P&gt;Use the Get Space API to retrieve the serialized space configuration:&lt;/P&gt;
&lt;PRE&gt;GET /api/2.0/genie/spaces/&amp;lt;space_id&amp;gt;&lt;/PRE&gt;
&lt;P&gt;API reference: &lt;A href="https://docs.databricks.com/api/workspace/genie/getspace" target="_blank"&gt;https://docs.databricks.com/api/workspace/genie/getspace&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;The response includes the full space configuration: tables, instructions, sample queries, joins, filters, etc. Everything except existing conversation threads.&lt;/P&gt;
&lt;P&gt;Step 2 - Create the space in PROD:&lt;/P&gt;
&lt;P&gt;Use the Create Space API to create it in the target workspace:&lt;/P&gt;
&lt;PRE&gt;POST /api/2.0/genie/spaces&lt;/PRE&gt;
&lt;P&gt;API reference: &lt;A href="https://docs.databricks.com/api/workspace/genie/createspace" target="_blank"&gt;https://docs.databricks.com/api/workspace/genie/createspace&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Important notes:&lt;BR /&gt;- Space title and description are NOT part of the serialized configuration; you set them during creation&lt;BR /&gt;- If your catalog/schema names differ between DEV and PROD, you need to adjust the table references in the serialized config before creating&lt;BR /&gt;- For updates to an existing PROD space, use the Update Space API&lt;/P&gt;
&lt;P&gt;Step 3 - Automate in CI/CD:&lt;/P&gt;
&lt;P&gt;You can wrap this in a Python script or use the Databricks SDK. A simple pattern:&lt;/P&gt;
&lt;PRE&gt;from databricks.sdk import WorkspaceClient

# One client per workspace
dev_client = WorkspaceClient(host="https://dev-workspace...", token="...")
prod_client = WorkspaceClient(host="https://prod-workspace...", token="...")

# Export from DEV
space = dev_client.genie.get_space(space_id="&amp;lt;dev_space_id&amp;gt;")

# Create in PROD (adjust catalog/schema if needed)
prod_client.genie.create_space(
    title="My Genie Space",
    description="Production genie space",
    # pass serialized config from dev space
)&lt;/PRE&gt;
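&lt;P&gt;If the catalog/schema names differ between environments, one simple (purely illustrative) approach is a string rewrite over the serialized configuration before creating the PROD space. The helper below and its prefix mapping are my own sketch, not part of the SDK:&lt;/P&gt;

```python
import json

def retarget_serialized_space(serialized, mapping):
    """Rewrite DEV catalog.schema prefixes to their PROD equivalents in a
    Genie space's serialized configuration (a JSON string)."""
    for dev_prefix, prod_prefix in mapping.items():
        serialized = serialized.replace(dev_prefix, prod_prefix)
    json.loads(serialized)  # sanity check: the rewrite kept valid JSON
    return serialized

# Hypothetical payload; real serialized configs carry much more detail
dev_config = json.dumps({"tables": ["dev_catalog.dev_schema.sales"]})
prod_config = retarget_serialized_space(
    dev_config, {"dev_catalog.dev_schema": "prod_catalog.prod_schema"}
)
print(prod_config)  # → {"tables": ["prod_catalog.prod_schema.sales"]}
```

&lt;P&gt;Blunt string replacement like this can over-match, so review the resulting diff before deploying.&lt;/P&gt;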
&lt;P&gt;There is also a community tool called "SpaceOps" that wraps this into a CLI for CI/CD:&lt;BR /&gt;&lt;A href="https://github.com/charotAmine/databricks-spaceops" target="_blank"&gt;https://github.com/charotAmine/databricks-spaceops&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;And a reusable Genie import/export component from Databricks field engineering:&lt;BR /&gt;&lt;A href="https://github.com/databricks-field-eng/reusable-ip-ai/tree/main/components/genie/genie_import_export" target="_blank"&gt;https://github.com/databricks-field-eng/reusable-ip-ai/tree/main/components/genie/genie_import_export&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;PART 4: RECOMMENDED OVERALL WORKFLOW&lt;/P&gt;
&lt;P&gt;Here is how I would structure your DEV-to-PROD workflow:&lt;/P&gt;
&lt;P&gt;1. Git Repository: Store your bundle configuration (databricks.yml), dashboard files (.lvdash.json), and Genie space export scripts in a single Git repo.&lt;/P&gt;
&lt;P&gt;2. DEV Workflow:&lt;BR /&gt;- Develop dashboards in the DEV workspace UI&lt;BR /&gt;- Use "databricks bundle generate dashboard --watch" to sync changes back to your local repo&lt;BR /&gt;- For Genie spaces, develop in DEV workspace UI, then export via API/SDK&lt;BR /&gt;- Commit everything to Git&lt;/P&gt;
&lt;P&gt;3. Approval Process:&lt;BR /&gt;- Use Git pull requests for review/approval before merging to main&lt;BR /&gt;- This gives you an audit trail of what changed and who approved it&lt;/P&gt;
&lt;P&gt;4. PROD Deployment (via CI/CD - e.g., GitHub Actions):&lt;BR /&gt;- On merge to main, run:&lt;BR /&gt;databricks bundle deploy --target prod&lt;BR /&gt;- For Genie spaces, run your API-based deployment script&lt;BR /&gt;- Optionally add a manual approval gate in your CI/CD pipeline&lt;/P&gt;
&lt;P&gt;5. Business User Access:&lt;BR /&gt;- In PROD, set embed_credentials: true on dashboards so viewers use the deployer's credentials&lt;BR /&gt;- Set appropriate permissions so business users can view but not edit&lt;BR /&gt;- For Genie spaces, configure permissions via the API or UI in PROD&lt;/P&gt;
&lt;P&gt;Example GitHub Actions workflow:&lt;/P&gt;
&lt;PRE&gt;name: Deploy to PROD
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - run: databricks bundle deploy --target prod
        env:
          DATABRICKS_HOST: ${{ secrets.PROD_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.PROD_TOKEN }}&lt;/PRE&gt;
&lt;P&gt;&lt;BR /&gt;PART 5: KEY GOTCHAS AND LIMITATIONS&lt;/P&gt;
&lt;P&gt;1. Warehouse IDs differ between workspaces: Always parameterize warehouse_id using bundle variables. Never hardcode them.&lt;/P&gt;
&lt;P&gt;2. Catalog/schema names: Use dataset_catalog and dataset_schema in the bundle config. Avoid fully-qualified table names in dashboard SQL when possible.&lt;/P&gt;
&lt;P&gt;3. Dashboard file sync: DABs uploads ALL files in the bundle directory. Keep your directory structure clean with only one .lvdash.json per dashboard.&lt;/P&gt;
&lt;P&gt;4. Dev mode naming: In development mode, DABs prepends "[dev username]" to resource names. This helps avoid conflicts but means the dashboard has a different display name in DEV vs PROD.&lt;/P&gt;
&lt;P&gt;5. Genie spaces are manual for now: Until native DABs support ships, you need a separate script or tool for Genie space promotion. The Genie Management APIs are in Beta, so there may be breaking changes.&lt;/P&gt;
&lt;P&gt;6. Git folder limit: If using Git folders for dashboards, there is a limit of 100 dashboards per Git folder.&lt;/P&gt;
&lt;P&gt;7. Publishing: Deploying a dashboard with DABs automatically publishes it. You do not need a separate publish step.&lt;/P&gt;
&lt;P&gt;8. Cross-workspace deployment from UI: The "rocket button" deployment in the workspace UI does NOT support deploying to a different workspace. You must use the CLI for cross-workspace deployment.&lt;/P&gt;
&lt;P&gt;I hope this helps you set up a solid workflow. The dashboard side with DABs is mature and production-ready. The Genie side requires the API-based workaround for now, but native DABs support is actively being worked on.&lt;/P&gt;
&lt;P&gt;Best regards&lt;/P&gt;</description>
      <pubDate>Sat, 07 Mar 2026 00:57:02 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/genie-dashboard-workflow/m-p/150048#M2520</guid>
      <dc:creator>SteveOstrowski</dc:creator>
      <dc:date>2026-03-07T00:57:02Z</dc:date>
    </item>
    <item>
      <title>Re: Genie / Dashboard Workflow</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/genie-dashboard-workflow/m-p/150970#M2534</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/133188"&gt;@SteveOstrowski&lt;/a&gt;&amp;nbsp;do you know whether setting dataset_catalog and dataset_schema in the dashboard DAB resource is supported for metric views, or if not already when we could expect to see this feature?&lt;/P&gt;&lt;P&gt;I'm asking because we are able to make it work for queries using the queryLines property, however when we use the asset_name property it stops working and requiring a fully qualified name.&lt;/P&gt;&lt;P&gt;As soon as we change from a fully qualified name "catalog.schema.table_name" to "table_name" we receive the following exception in databricks dashboard:&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;[TABLE_OR_VIEW_NOT_FOUND] The table or view ``.``.`` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog. To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS. SQLSTATE: 42P01; line 1 pos 2198&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 15 Mar 2026 10:30:34 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/genie-dashboard-workflow/m-p/150970#M2534</guid>
      <dc:creator>ATN</dc:creator>
      <dc:date>2026-03-15T10:30:34Z</dc:date>
    </item>
    <item>
      <title>Re: Genie / Dashboard Workflow</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/genie-dashboard-workflow/m-p/150983#M2535</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hi ATN,&lt;/P&gt;
&lt;P&gt;Good question. Based on what I know, &lt;CODE&gt;dataset_catalog&lt;/CODE&gt; and &lt;CODE&gt;dataset_schema&lt;/CODE&gt; in the dashboard DAB resource currently apply to datasets that use &lt;STRONG&gt;SQL queries&lt;/STRONG&gt; (i.e., the &lt;CODE&gt;queryLines&lt;/CODE&gt; property), but they do &lt;STRONG&gt;not&lt;/STRONG&gt; resolve for datasets that reference metric views via the &lt;CODE&gt;asset_name&lt;/CODE&gt; property. That lines up with the behavior you are seeing: it works with &lt;CODE&gt;queryLines&lt;/CODE&gt; but fails with &lt;CODE&gt;asset_name&lt;/CODE&gt; and an unqualified name.&lt;/P&gt;
&lt;P&gt;Metric views in general require fully qualified three-part names (&lt;CODE&gt;catalog.schema.metric_view_name&lt;/CODE&gt;). The &lt;CODE&gt;dataset_catalog&lt;/CODE&gt;/&lt;CODE&gt;dataset_schema&lt;/CODE&gt; substitution does not currently get applied when the dashboard resolves an &lt;CODE&gt;asset_name&lt;/CODE&gt; reference, which is why you get the &lt;CODE&gt;TABLE_OR_VIEW_NOT_FOUND&lt;/CODE&gt; error with the empty backticks (&lt;CODE&gt;``.``.``&lt;/CODE&gt;): the catalog/schema values are not being injected at all.&lt;/P&gt;
&lt;P&gt;For now, the workaround is to use the fully qualified name in the &lt;CODE&gt;asset_name&lt;/CODE&gt; property (e.g., &lt;CODE&gt;catalog.schema.table_name&lt;/CODE&gt;). If you need environment-specific routing, you could use bundle variable substitution to parameterize the catalog/schema portions of the fully qualified name. I do not have a confirmed timeline for when &lt;CODE&gt;dataset_catalog&lt;/CODE&gt;/&lt;CODE&gt;dataset_schema&lt;/CODE&gt; will be extended to cover &lt;CODE&gt;asset_name&lt;/CODE&gt; references, but it is a known gap.&lt;/P&gt;
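&lt;P&gt;As a sketch of that variable-substitution workaround (assuming the dataset's &lt;CODE&gt;asset_name&lt;/CODE&gt; can be expressed in bundle-managed configuration; the names below are made up):&lt;/P&gt;

```yaml
variables:
  catalog:
    description: "Target catalog"
  schema:
    description: "Target schema"

targets:
  dev:
    variables:
      catalog: "dev_catalog"
      schema: "dev_schema"
  prod:
    variables:
      catalog: "prod_catalog"
      schema: "prod_schema"

# Wherever the metric-view reference is defined, compose the fully
# qualified name from the variables instead of hardcoding it:
#   asset_name: ${var.catalog}.${var.schema}.my_metric_view
```

&lt;P&gt;That keeps a single definition while routing each environment to its own catalog/schema.&lt;/P&gt;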
&lt;P&gt;&lt;STRONG&gt;Sources:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.databricks.com/aws/en/dev-tools/bundles/resources" data-saferedirecturl="https://www.google.com/url?q=https://docs.databricks.com/aws/en/dev-tools/bundles/resources&amp;amp;source=gmail&amp;amp;ust=1773680931870000&amp;amp;usg=AOvVaw1gx-8yDp2yomSu4nVeBfkX" target="_blank"&gt;Databricks Asset Bundles resources documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/databricks/cli/pull/4130" data-saferedirecturl="https://www.google.com/url?q=https://github.com/databricks/cli/pull/4130&amp;amp;source=gmail&amp;amp;ust=1773680931870000&amp;amp;usg=AOvVaw2roUF86I3jiVVWvCNSaC80" target="_blank"&gt;CLI PR #4130 - Add support for configurable catalog/schema for dashboards&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Sun, 15 Mar 2026 17:09:40 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/genie-dashboard-workflow/m-p/150983#M2535</guid>
      <dc:creator>SteveOstrowski</dc:creator>
      <dc:date>2026-03-15T17:09:40Z</dc:date>
    </item>
    <item>
      <title>Re: Genie / Dashboard Workflow</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/genie-dashboard-workflow/m-p/150984#M2536</link>
      <description>&lt;P&gt;Thanks Steve for the quick reply and the confirmation. I would have wished to see the lack of support for metric view to be better documented &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 15 Mar 2026 17:36:28 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/genie-dashboard-workflow/m-p/150984#M2536</guid>
      <dc:creator>ATN</dc:creator>
      <dc:date>2026-03-15T17:36:28Z</dc:date>
    </item>
  </channel>
</rss>

