To dynamically detect your Databricks environment (dev, qa, prod) in a serverless notebook, without relying on manual REST API calls, you typically need a reliable way to extract context directly inside the notebook. However, serverless notebooks often have limitations in accessing certain workspace or cluster metadata compared to traditional jobs or interactive clusters.
Direct Context Access in Serverless Notebooks
Currently, the most common workaround (using spark.conf.get("spark.databricks.xxxxx") and parsing the value as you described) is largely dependent on how your workspace and resources are named or configured. In a serverless environment, some configs may not be exposed due to isolation/security. However, there are a couple of alternative strategies that might work for you:
1. Workspace and Resource Naming Conventions
If your managed resource groups or workspace names contain the environment marker (such as "dev", "qa", or "prod"), you can try to extract it directly from another configuration or path visible in the notebook.
- Try reading from other available configs, such as:
  - spark.conf.get("spark.databricks.workspaceUrl")
  - spark.conf.get("spark.databricks.clusterUsageTags.clusterName")
  - Any Spark context property containing environment info.
If these are visible, you can parse them as you do with managedResourceGroup:
workspace_url = spark.conf.get("spark.databricks.workspaceUrl")

# dic is your existing {environment: marker} mapping from the managedResourceGroup approach
environment = None
for key, value in dic.items():
    if value in workspace_url:
        environment = value
        break
However, in Databricks serverless, some configs are restricted.
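A minimal sketch of a more defensive version of the same idea: probe a few candidate configs and pass a default to spark.conf.get so missing keys don't raise. The key list and markers here are just examples to adapt:

# Probe candidate configs; a default in spark.conf.get avoids exceptions for keys
# that serverless does not expose.
candidate_keys = [
    "spark.databricks.workspaceUrl",
    "spark.databricks.clusterUsageTags.clusterName",
]
environment = None
for conf_key in candidate_keys:
    value = spark.conf.get(conf_key, "") or ""
    for marker in ("dev", "qa", "prod"):
        if marker in value:
            environment = marker
            break
    if environment:
        break
print(f"Detected environment: {environment}")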
2. Mounts, Paths, or Secrets
If your workspace structure, mounts, or secret scopes are named by environment, you can list them and look for environment markers:
# List mount points (if accessible)
for mount in dbutils.fs.mounts():
    if 'dev' in mount.mountPoint:
        environment = 'dev'
    elif 'qa' in mount.mountPoint:
        environment = 'qa'
    elif 'prod' in mount.mountPoint:
        environment = 'prod'
Or, for secret scopes:
for scope in dbutils.secrets.listScopes():
    if 'dev' in scope.name:
        environment = 'dev'
    # Same for qa/prod
Again, access may be limited in serverless.
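To keep the notebook from failing outright when these utilities are restricted, you can wrap the lookups. This is a sketch assuming dbutils is available and your mount/scope names actually contain the markers:

def environment_from_names(names, markers=("dev", "qa", "prod")):
    # Return the first marker found in any of the given names, else None
    for name in names:
        for marker in markers:
            if marker in name:
                return marker
    return None

environment = None
try:
    environment = environment_from_names(m.mountPoint for m in dbutils.fs.mounts())
except Exception as e:
    print(f"Mounts not accessible here: {e}")

if environment is None:
    try:
        environment = environment_from_names(s.name for s in dbutils.secrets.listScopes())
    except Exception as e:
        print(f"Secret scopes not accessible here: {e}")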
3. Group Membership (via Databricks APIs)
Direct access to group membership (to see which group the notebook runs under) almost always requires REST API calls. This is a security feature, especially in serverless, to restrict cross-context access. If you do need this, you will need an access token, or the workspace must be configured to allow reading such metadata.
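For completeness, if a token does become available to you (for example a PAT stored in a secret scope), the SCIM Me endpoint returns the calling user's groups. This is only a sketch; the workspace URL, scope, and key names are placeholders, and it assumes the requests library is available:

import requests

workspace_url = "https://<your-workspace-host>"                    # placeholder
token = dbutils.secrets.get(scope="env-scope", key="api-token")    # placeholder scope/key

resp = requests.get(
    f"{workspace_url}/api/2.0/preview/scim/v2/Me",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
group_names = [g.get("display", "").lower() for g in resp.json().get("groups", [])]
environment = next(
    (marker for marker in ("dev", "qa", "prod") if any(marker in g for g in group_names)),
    None,
)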
If REST API use is truly blocked for you, you may need to:
- Work with your admin to expose a property or config that is accessible.
- Tag jobs, clusters, or workspace resources with the environment, so your notebook can infer it with available APIs.
4. Job, Cluster, or Notebook Tags
If you can tag jobs or clusters with environment info, check if those tags are available in the context:
# Try reading a tag property
environment = spark.conf.get("spark.databricks.clusterUsageTags.tagName")
Here, tagName is a tag key your team configures to hold the environment value.
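A slightly more defensive variant, where "environment" is just an example tag key your team would have to define, and the default keeps the call from raising if the tag isn't surfaced:

# Returns None instead of raising if the tag is not exposed in this compute context
environment = spark.conf.get("spark.databricks.clusterUsageTags.environment", None)
if environment is None:
    print("Environment tag not visible; falling back to another strategy")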
Conclusion
- Without REST API access: Your options are limited to what Databricks exposes via Spark configs, secrets, mounts, workspace URLs, or resource naming conventions.
- Serverless limitations: Some Spark configs, cluster tags, or group info may not be visible. Try to extract the environment from accessible configs or resource names.
- Recommendation: Work with your admin/devops to add an environment marker to a readily accessible config, workspace name, or tag that is visible in serverless.
If you have no way to expose the environment in a config, mount, or resource name, and cannot use REST APIs, serverless notebooks may be fundamentally limited for your use case. Consider raising this with your Databricks admin for a sustainable solution.
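If you end up combining several of these probes in one notebook, a small best-effort helper keeps it tidy. This sketch only strings together the ideas above; the tag key and markers remain assumptions to adapt:

def detect_environment(markers=("dev", "qa", "prod")):
    """Best-effort environment detection; returns a marker or None."""
    candidates = []

    # Spark configs whose values may contain the environment marker
    for key in ("spark.databricks.workspaceUrl",
                "spark.databricks.clusterUsageTags.clusterName",
                "spark.databricks.clusterUsageTags.environment"):  # example custom tag
        candidates.append(spark.conf.get(key, "") or "")

    # Mount points and secret scopes, if dbutils allows listing them
    try:
        candidates.extend(m.mountPoint for m in dbutils.fs.mounts())
    except Exception:
        pass
    try:
        candidates.extend(s.name for s in dbutils.secrets.listScopes())
    except Exception:
        pass

    for value in candidates:
        for marker in markers:
            if marker in value.lower():
                return marker
    return None

environment = detect_environment()
print(f"Detected environment: {environment}")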