
Get managedResourceGroup from serverless

carlos_tasayco
Contributor

Hello,

In my job I have a task where I need to modify a notebook so it determines the environment dynamically. For example:

This is how we get it:

dic = {"D":"dev", "Q":"qa", "P":"prod"}
managedResourceGroup = spark.conf.get("spark.databricks.xxxxx")
xxxxx_Index = managedResourceGroup.find('XXXX')
environment = managedResourceGroup[xxxxx_Index+6:(xxxxx_Index+7)]

Basically, with that workaround I get the environment, but in a serverless notebook I cannot. I checked with the Assistant, but its solution does not work well for me since it requires access to the REST API.

Do you know how I can do this? I was thinking of reading the groups (the group names contain the environment), but that again requires the REST API first.

Thanks in advance for your help.
1 REPLY

mark_ott
Databricks Employee

To dynamically detect your Databricks environment (dev, qa, prod) in a serverless notebook, without relying on manual REST API calls, you typically need a reliable way to extract context directly inside the notebook. However, serverless notebooks often have limitations in accessing certain workspace or cluster metadata compared to traditional jobs or interactive clusters.

Direct Context Access in Serverless Notebooks

Currently, the most common workaround, using spark.conf.get("spark.databricks.xxxxx") and parsing the value as you described, depends largely on how your workspace and resources are named or configured. In a serverless environment, some configs may not be exposed due to isolation and security restrictions. However, there are a couple of alternative strategies that might work for you:

1. Workspace and Resource Naming Conventions

If your managed resource groups or workspace names contain the environment marker (such as "dev", "qa", or "prod"), you can try to extract it directly from another configuration or path visible in the notebook.

  • Try reading from other available configs, such as:

    • spark.conf.get("spark.databricks.workspaceUrl")

    • spark.conf.get("spark.databricks.clusterUsageTags.clusterName")

    • Any spark context property containing environment info.

If these are visible, you can parse them as you do with managedResourceGroup:

python
# Same environment mapping as in the original notebook
dic = {"D": "dev", "Q": "qa", "P": "prod"}

workspace_url = spark.conf.get("spark.databricks.workspaceUrl")
environment = None
for key, value in dic.items():
    if value in workspace_url:
        environment = value
        break

However, in Databricks serverless, some configs are restricted.

2. Mounts, Paths, or Secrets

If your workspace structure, mounts, or secret scopes are named by environment, you can list them and look for environment markers:

python
# List mount points (if accessible) and look for an environment marker
environment = None
for mount in dbutils.fs.mounts():
    if 'dev' in mount.mountPoint:
        environment = 'dev'
    elif 'qa' in mount.mountPoint:
        environment = 'qa'
    elif 'prod' in mount.mountPoint:
        environment = 'prod'

Or, for secret scopes:

python
for scope in dbutils.secrets.listScopes():
    if 'dev' in scope.name:
        environment = 'dev'
    elif 'qa' in scope.name:
        environment = 'qa'
    elif 'prod' in scope.name:
        environment = 'prod'

Again, access may be limited in serverless.

3. Group Membership (via Databricks APIs)

Direct access to group membership (to see which group the notebook runs under) almost always requires REST API calls. This is a security feature, especially in serverless, to restrict cross-context access. If access is required, you need to have an access token or configure the workspace to allow reading such metadata.
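
If a token or workspace policy does allow it, one option is the Databricks SDK for Python rather than hand-rolled REST calls. The sketch below is only illustrative: it assumes the databricks-sdk package is available, that the notebook's ambient credentials can be used, and that your group names contain the environment marker.

python
# Sketch only: requires the databricks-sdk package and permission to read the current user
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()          # inside a notebook this typically picks up ambient auth
me = w.current_user.me()       # SCIM "Me" lookup for the identity running the notebook
group_names = [g.display for g in (me.groups or [])]

environment = None
for name in group_names:
    for env in ('dev', 'qa', 'prod'):
        if env in name.lower():
            environment = env
            break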

If REST API use is truly blocked for you, you may need to:

  • Work with your admin to expose a property or config that is accessible.

  • Tag jobs, clusters, or workspace resources with the environment, so your notebook can infer it with available APIs.

4. Job, Cluster, or Notebook Tags

If you can tag jobs or clusters with environment info, check if those tags are available in the context:

python
# Try reading a tag property
environment = spark.conf.get("spark.databricks.clusterUsageTags.tagName")

Where tagName is a key your team configures containing the environment.

Conclusion

  • Without REST API access: Your options are limited to what Databricks exposes via Spark configs, secrets, mounts, workspace URLs, or resource naming conventions.

  • Serverless limitations: Some Spark configs, cluster tags, or group info may not be visible. Try to extract environment from accessible configs or resource names.

  • Recommendation: Work with your admin/devops to add an environment marker to a readily-accessible config, workspace name, or tag that is visible in serverless.

 

If you have no way to expose the environment in a config, mount, or resource name, and cannot use REST APIs, serverless notebooks may be fundamentally limited for your use case. Consider raising this with your Databricks admin for a sustainable solution.
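
For reference, a rough sketch that strings the checks above together might look like the following. Every environment marker and config key here is an assumption based on the naming conventions discussed in this thread, and any individual call may be restricted in serverless:

python
def detect_environment():
    """Best-effort environment detection from whatever is visible in the notebook."""
    markers = ("dev", "qa", "prod")

    # 1. Spark configs whose value may carry the environment
    for conf_key in ("spark.databricks.workspaceUrl",
                     "spark.databricks.clusterUsageTags.clusterName"):
        try:
            value = spark.conf.get(conf_key).lower()
            for env in markers:
                if env in value:
                    return env
        except Exception:
            pass  # config not exposed in this (serverless) context

    # 2. Mount points named by environment
    try:
        for mount in dbutils.fs.mounts():
            for env in markers:
                if env in mount.mountPoint.lower():
                    return env
    except Exception:
        pass

    # 3. Secret scopes named by environment
    try:
        for scope in dbutils.secrets.listScopes():
            for env in markers:
                if env in scope.name.lower():
                    return env
    except Exception:
        pass

    return None  # nothing accessible carried an environment marker

environment = detect_environment()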
