Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to import a Lakeview Dashboard programmatically (API or CLI)?

Gutek
New Contributor II

I'm trying to import a Lakeview dashboard that I originally exported through the CLI (version 0.213.0). The exported file has the extension .lvdash.json and is a single-line JSON file.

I can't get it to work. I tried this command:


databricks workspace import /Users/my@user.com/my_first_lakeview_dashboard.lvdash.json --profile prd --file dashboard.json


but I get: "Error: The zip file may not be valid or may be an unsupported version. Hint: Objects imported using format=SOURCE are expected to be zip encoded databricks source notebook(s) by default. Please specify a language using the --language flag if you are trying to import a single uncompressed notebook"

There is a --language flag, but the only options available are [PYTHON, R, SCALA, SQL].

There is a --json flag, but it's only for providing the request in JSON format.

There is no mention of Lakeview dashboards in this API. Is this even supported? Or do I have the syntax wrong? The Lakeview API only supports publishing.


4 REPLIES

miranda_luna_db
Databricks Employee

Thanks for flagging. There should be enhanced API documentation specific to Lakeview in the next week or two (PR is in review). Keep an eye out for a page called "Use the Lakeview API and Workspace API to create and manage Lakeview dashboards."

Currently, there is API support for Lakeview dashboards:

  • Export/Import (via workspace APIs)
  • List (via workspace APIs)
  • Get Status (via workspace APIs)
  • Publish

Additional areas are under development.
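
For example, the "List" support means dashboards show up through the standard workspace list endpoint. Here is a minimal Python sketch; the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables and the DASHBOARD object type are assumptions to verify against your own workspace:

import os

import requests

# Assumed environment: DATABRICKS_HOST is the workspace URL
# (e.g. https://<workspace-host>) and DATABRICKS_TOKEN is a personal access token.
HOST = os.environ["DATABRICKS_HOST"]
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

response = requests.get(
    f"{HOST}/api/2.0/workspace/list",
    headers=HEADERS,
    params={"path": "/Users/my@user.com/examples"},
)
response.raise_for_status()

# Lakeview dashboards are expected to be reported with object_type "DASHBOARD"
# (an assumption worth double-checking in your workspace).
for obj in response.json().get("objects", []):
    if obj.get("object_type") == "DASHBOARD":
        print(obj["path"])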

For a bit more information on Lakeview dashboard export/import via the API specifically:

Exporting a dashboard

If we want to export the contents of mydashboard.lvdash.json, we can use the Workspace Export API to do so.

GET /api/2.0/workspace/export

Query parameters: 
{
	"path": "/Users/my@user.com/examples/mydashboard.lvdash.json",
	"direct_download": true
}

Response:
{
	"pages": [
		{
			"name": "7db2d3cf",
			"displayName": "New Page"
		}
	]
}

This response shows the contents of a minimal dashboard definition, which is blank.

Note that the file name without the extension (mydashboard) is what appears as the dashboard's name in the workspace. If the direct_download property is left out of the request or set to false, the response will include the base64-encoded version of the JSON string. We will use this later for the import request; a scripted version follows the second example below.

GET /api/2.0/workspace/export

Query parameters: 
{
	"path": "/Users/my@user.com/examples/mydashboard.lvdash.json",
	"direct_download": false
}

Response:
{
	"content": "eyJwYWdlcyI6W3sibmFtZSI6IjdkYjJkM2NmIiwiZGlzcGxheU5hbWUiOiJOZXcgUGFnZSJ9XX0=",
	"file_type": "lvdash.json"
}
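
Scripted, the same export looks like the following. This is a minimal sketch, assuming the same DATABRICKS_HOST and DATABRICKS_TOKEN environment variables as above rather than any official tooling:

import base64
import json
import os

import requests

HOST = os.environ["DATABRICKS_HOST"]
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Export with direct_download=false so the response carries the
# base64-encoded dashboard definition.
response = requests.get(
    f"{HOST}/api/2.0/workspace/export",
    headers=HEADERS,
    params={
        "path": "/Users/my@user.com/examples/mydashboard.lvdash.json",
        "direct_download": "false",
    },
)
response.raise_for_status()

# Decode "content" to recover the .lvdash.json definition; the encoded
# form is what the import request below expects.
definition = json.loads(base64.b64decode(response.json()["content"]))
print(json.dumps(definition, indent=2))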

Importing a dashboard

If we want to import another dashboard into the workspace using the same contents as mydashboard.lvdash.json, we can do so by using the Workspace Import API. For the import to be properly recognized as a Lakeview dashboard, a few important parameters must be set:

  • "format": "AUTO" - this setting allows the system to automatically detect the asset type.
  • "path": the file path must end in ".lvdash.json". In conjunction with the format, this causes the content to be imported as a Lakeview dashboard.

If these settings are not configured properly, the import might succeed, but the dashboard would be treated as a normal file. A scripted version of this import appears after the examples below.

POST /api/2.0/workspace/import

Request body parameters:
{
	"path": "/Users/my@user.com/examples/myseconddashboard.lvdash.json",
	"content": "eyJwYWdlcyI6W3sibmFtZSI6IjdkYjJkM2NmIiwiZGlzcGxheU5hbWUiOiJOZXcgUGFnZSJ9XX0=",
	"format": "AUTO"
}

Response:
{}

If we were to immediately issue the same API request, we would get an error:

{
	"error_code": "RESOURCE_ALREADY_EXISTS",
	"message": "Path (/Users/my@user.com/examples/myseconddashboard.lvdash.json) already exists."
}

In order to overwrite the contents of an existing dashboard in place, the "overwrite" property can be set:

POST /api/2.0/workspace/import

Request body parameters:
{
	"path": "/Users/my@user.com/examples/myseconddashboard.lvdash.json",
	"content": "eyJwYWdlcyI6W3sibmFtZSI6IjdkYjJkM2NmIiwiZGlzcGxheU5hbWUiOiJOZXcgUGFnZSJ9XX0=",
	"format": "AUTO",
	"overwrite": true
}

Response:
{}
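
The same import can be scripted as below; again a minimal sketch under the same environment-variable assumptions, reusing the blank dashboard definition from the export example:

import base64
import json
import os

import requests

HOST = os.environ["DATABRICKS_HOST"]
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# The blank dashboard definition exported above.
definition = {"pages": [{"name": "7db2d3cf", "displayName": "New Page"}]}

response = requests.post(
    f"{HOST}/api/2.0/workspace/import",
    headers=HEADERS,
    json={
        # The ".lvdash.json" suffix plus format AUTO is what makes the
        # workspace recognize the content as a Lakeview dashboard.
        "path": "/Users/my@user.com/examples/myseconddashboard.lvdash.json",
        "content": base64.b64encode(
            json.dumps(definition).encode("utf-8")
        ).decode("utf-8"),
        "format": "AUTO",
        "overwrite": True,
    },
)
response.raise_for_status()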

yegorski
New Contributor III

Thanks for the very detailed response; it's a fantastic example of thorough support.

We have a problem with the import API. No matter the combination we try for the content parameter (base64, pure JSON, json.dumps output, str, etc.), the API always returns:

b'{"error_code":"MALFORMED_REQUEST","message":"Invalid JSON given in the body of the request - failed to parse given JSON"}\n'

Here's the code we're using: it iterates over all our warehouses and creates a dashboard for each. The JSON dashboard file is taken straight from

https://github.com/CodyAustinDavis/dbsql_sme/blob/main/Observability%20Dashboards%20and%20DBA%20Reso...

Has there been an update to the API to make Lakeview Dashboards work more smoothly?

yegorski
New Contributor III

Turns out the problem was between keyboard and chair! The issue was with using the requests data parameter instead of json. Here's the full working code:

import base64
import json
import os

import requests

WORKSPACE_DASHBOARDS_FOLDER = "/Workspace/Engineering Metrics/SQL Warehouse Monitoring"

# Template dashboard definition; per-warehouse values are filled in below.
with open("./dashboard_template.json", "r") as file:
    DASHBOARD_TEMPLATE = json.load(file)


class Databricks:
    def __init__(self, api_key):
        self.api_key = api_key
        # Base API URL; the standard DATABRICKS_HOST environment variable
        # (e.g. https://<workspace-host>) is assumed here.
        self.url = f"{os.environ['DATABRICKS_HOST']}/api/2.0"
        self.headers = {
            "Accept": "application/json",
            "Content-Type": "application/json",
            "Authorization": f"Bearer {self.api_key}",
        }

    def make_request(self, method, query, **kwargs):
        response = requests.request(
            method, f"{self.url}{query}", headers=self.headers, **kwargs
        )
        response.raise_for_status()
        return response

    def list_serverless_warehouses(self):
        response = self.make_request("get", "/sql/warehouses").json()["warehouses"]
        warehouses = []
        for warehouse in response:
            if warehouse["enable_serverless_compute"]:
                warehouses.append(
                    {
                        "id": warehouse["id"],
                        "name": warehouse["name"],
                    }
                )
        return warehouses

    def create_dashboard(self, warehouse_id: str, warehouse_name: str):
        dashboard_config = DASHBOARD_TEMPLATE
        # Point each dataset's first parameter at the target warehouse.
        for i, _ in enumerate(dashboard_config["datasets"]):
            dashboard_config["datasets"][i]["parameters"][0]["defaultSelection"][
                "values"
            ]["values"][0]["value"] = warehouse_id
        request_data = {
            "path": f"{WORKSPACE_DASHBOARDS_FOLDER}/{warehouse_name}.lvdash.json",
            "content": base64.b64encode(
                json.dumps(dashboard_config).encode("utf-8")
            ).decode("utf-8"),
            "format": "AUTO",
            "overwrite": True,
        }
        # json= (not data=) is the crucial detail: requests serializes the
        # body as JSON, which is what /workspace/import expects.
        return self.make_request("post", "/workspace/import", json=request_data)


databricks = Databricks(os.getenv("DATABRICKS_TOKEN_PREMIUM"))

warehouses = databricks.list_serverless_warehouses()

for warehouse in warehouses:
    databricks.create_dashboard(warehouse["id"], warehouse["name"])

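For anyone hitting the same MALFORMED_REQUEST error: the difference comes down to how requests encodes the request body. A minimal contrast, with placeholder url, headers, and payload standing in for the real values above:

import requests

url = "https://<workspace-host>/api/2.0/workspace/import"  # placeholder
headers = {"Authorization": "Bearer <token>"}  # placeholder
payload = {"path": "/Workspace/example.lvdash.json", "format": "AUTO"}

# data= with a dict form-encodes the body ("path=...&format=AUTO"), so the
# endpoint can't parse it as JSON and returns MALFORMED_REQUEST.
requests.post(url, headers=headers, data=payload)

# json= serializes the dict to a JSON body (and sets the content type),
# which is what /workspace/import expects.
requests.post(url, headers=headers, json=payload)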

miranda_luna_db
Databricks Employee

Glad you've got everything up and running!
