12-29-2024 06:07 AM
Hi everyone,
I’m experimenting with the Databricks VS Code extension, using Spark Connect to run code locally in my Python environment while connecting to a Databricks cluster. I’m trying to call one notebook from another via:
notebook_params = {
    "catalog_name": "dev",
    "bronze_schema_name": "bronze",
    "silver_schema_name": "silver",
    "source_table_name": "source_table",
    "target_table_name": "target_table"
}
# Run the notebook with the specified parameters
result = dbutils.notebook.run("merge_table_silver_target_table", 60, notebook_params)
However, the call fails with an error.
From what I’ve read, it seems dbutils.notebook.run() relies on the full Databricks notebook context, which might not exist in a local Spark Connect session.
My question: could you confirm, as per the documentation, that running a notebook with dbutils.notebook.run is not possible in a local environment using Databricks Connect?
Accepted Solutions
12-29-2024 02:47 PM
Hi @filipniziol,
It is confirmed that dbutils.notebook.run relies on the full Databricks notebook context, which is not available in a local Spark Connect session. Therefore, running a notebook with dbutils.notebook.run is not possible in a local environment using Databricks Connect.
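One possible workaround, if the goal is simply to trigger that notebook from a local Databricks Connect session, is to submit it as a one-time job run on the cluster via the Databricks SDK. This is not part of the accepted answer, just a minimal sketch assuming databricks-sdk is installed and authenticated for your workspace; the notebook path and cluster id below are placeholders:

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # auth picked up from env vars or ~/.databrickscfg

notebook_params = {
    "catalog_name": "dev",
    "bronze_schema_name": "bronze",
    "silver_schema_name": "silver",
    "source_table_name": "source_table",
    "target_table_name": "target_table",
}

# Submit a one-time run of the notebook on an existing cluster and wait for it
run = w.jobs.submit(
    run_name="merge_table_silver_target_table",
    tasks=[
        jobs.SubmitTask(
            task_key="merge_silver",
            existing_cluster_id="<cluster-id>",  # placeholder
            notebook_task=jobs.NotebookTask(
                notebook_path="/Workspace/path/to/merge_table_silver_target_table",  # placeholder
                base_parameters=notebook_params,
            ),
        )
    ],
).result()

print(run.state.result_state)

Because the run executes on the cluster, the target notebook has its full notebook context, so dbutils inside it behaves as usual.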
12-31-2024 06:01 AM
@filipniziol just curious whether getting the context and setting it manually would help. Have you tried this approach?
Example:
ctx = dbutils.notebook.entry_point.getDbutils().notebook().getContext()
dbutils.notebook.setContext(ctx)
Or
from pyspark.dbutils import DBUtils
from pyspark.sql import SparkSession

spark = SparkSession.getActiveSession()
_dbutils = DBUtils(spark)  # DBUtils takes the Spark session directly
notebook_context = _dbutils.notebook.entry_point.getDbutils().notebook().getContext()
notebook_context.clusterId().get()
The above should return an id, e.g. '1231-135641-d7rr9qht-v2y', so I'm curious whether setting the context via dbutils.notebook.setContext() would help it go through.
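Separately, if only the cluster id is needed from the local session, it can usually be read from the Spark configuration instead of the notebook context. A small sketch, assuming Databricks Connect is already configured for the cluster (the conf key is the standard cluster usage tag, not something specific to this thread):

from databricks.connect import DatabricksSession

# Spark Connect session against the configured cluster
spark = DatabricksSession.builder.getOrCreate()

# The lookup is served by the remote cluster, so no notebook context is needed
cluster_id = spark.conf.get("spark.databricks.clusterUsageTags.clusterId")
print(cluster_id)  # e.g. '1231-135641-d7rr9qht'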

