Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Error trying to edit Job Cluster via Databricks CLI

Adam_Borlase
New Contributor III

Good Day all,

After having issues with the cloud resources allocated to Lakeflow jobs and gateways, I am trying to apply a policy to the cluster that is allocated to the job. I am very new to much of the Databricks platform and its administration, so all help is appreciated.

I have run the following command: 

```
databricks clusters edit clusterid 16.4.x-scala2.13 --apply-policy-default-values --policy-id policyid --num-workers 1 -p adam
```

I am now getting the following error:

Error: NO_ISOLATION or custom access modes are not allowed in this workspace. Please contact your workspace administrator to use this feature.

I have looked through the account and workspace settings and can't see where to change this. A search suggested the setting is editable on the cluster, but I can't edit the cluster for the created pipeline.

Is there a problem with my CLI command, or where do I need to make the change that will let me apply a compute policy to the DLT compute?

1 ACCEPTED SOLUTION: see the reply from Louis_Frolio below.

4 REPLIES

Louis_Frolio
Databricks Employee

Hey @Adam_Borlase , Thanks for sharing the command and error—this is a common pitfall when trying to control Lakeflow (DLT) compute with cluster policies.

 

What the error means

The message “NO_ISOLATION or custom access modes are not allowed in this workspace” indicates your workspace has been configured to disallow the legacy “No isolation shared” and “Custom” access modes. Admins can hide or block the “No isolation shared” mode in the workspace settings; when a cluster or policy attempts to use it, you’ll get exactly this error. Databricks recommends using modern access modes (Standard or Dedicated) rather than “No isolation shared,” which is considered legacy and not recommended.
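To make the fix concrete, here is a minimal sketch of a cluster spec that pins a UC-compatible access mode explicitly instead of relying on legacy defaults. The IDs and node type are placeholders, not values from this thread; the JSON is validated locally, and the actual CLI call (commented out) needs a configured workspace.

```shell
# Sketch: set data_security_mode explicitly in the cluster spec.
# <CLUSTER_ID> and <SPARK_VERSION> are placeholders.
spec='{
  "cluster_id": "<CLUSTER_ID>",
  "spark_version": "<SPARK_VERSION>",
  "node_type_id": "Standard_D8s_v5",
  "num_workers": 1,
  "data_security_mode": "USER_ISOLATION"
}'
# Validate the payload before sending it anywhere.
echo "$spec" | python3 -m json.tool > /dev/null && echo "spec is valid JSON"
# databricks clusters edit --json "$spec"   # requires a configured workspace
```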
 

Why editing the cluster won’t apply to DLT/Lakeflow pipelines

Lakeflow Declarative Pipelines (formerly DLT) create ephemeral pipeline clusters on-the-fly. You don’t edit those clusters directly; instead, you attach a compute policy to the pipeline’s compute settings (default and maintenance clusters) so that Databricks enforces the policy when it provisions the pipeline’s compute. Policies for pipeline compute should be written with cluster_type set to dlt, and you can have policy defaults automatically applied by setting apply_policy_default_values to true in the pipeline config.
 

How to apply a compute policy to Lakeflow (DLT) compute

Use one of these supported paths:
  • UI: Open the pipeline, click Settings, uncheck Serverless if you want classic compute, then select your compute policy in the Compute section and Save. This attaches the policy to both the update and maintenance clusters by default.
  • API/CLI (recommended for reproducibility): Update the pipeline definition to include the policy on the clusters definition and apply policy defaults, for example:

```json
{
  "clusters": [
    {
      "label": "default",
      "policy_id": "<policy-id>",
      "apply_policy_default_values": true
    }
  ]
}
```

In the policy itself, include `"cluster_type": { "type": "fixed", "value": "dlt" }` so it is selectable for pipelines.
Important: Do not set autotermination_minutes in policies for pipeline compute—the pipeline shuts down its own compute, and this policy setting will cause an error.
 

Fix your policy’s access mode

Your policy (or cluster spec) is likely setting either NO_ISOLATION (legacy “No isolation shared”) or trying to use a legacy/custom mode. In a Unity Catalog-enabled workspace, use one of the UC-compatible access modes:
  • Use data_security_mode = USER_ISOLATION to get Standard access mode (multi-user, isolated).
  • Or use data_security_mode = SINGLE_USER to get Dedicated access mode (assigned to one principal—user or group).
Also avoid legacy confs like spark_conf.spark.databricks.cluster.profile in policies; forbid them if necessary:

```json
"spark_conf.spark.databricks.cluster.profile": { "type": "forbidden", "hidden": true }
```
 

If you truly need “No isolation shared”

Workspace admins can allow or hide “No isolation shared” from Admin settings. If your organization has enforced user isolation, “No isolation shared” is blocked and any attempt to use it will fail as you observed. Given current best practices, Databricks recommends staying on Standard or Dedicated access modes rather than enabling “No isolation shared.”
 

What’s wrong with the CLI command

A few issues:

  • clusters edit modifies an existing interactive cluster, not pipeline-created compute. Editing the ephemeral pipeline cluster will not stick; attach the policy to the pipeline instead (UI or API as above).
  • The token “clusterid 16.4.x-scala2.13” is not a cluster ID—it looks like a runtime version string. The clusters edit command expects an actual cluster ID; pipeline clusters are managed by the pipeline and should be controlled via the pipeline config rather than clusters edit.
  • To get the desired effect, use the pipeline update path and include policy_id and apply_policy_default_values in the clusters definition; ensure your policy includes cluster_type = dlt and a UC-compatible data_security_mode.
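Putting those points together, here is a hedged sketch of the pipeline-update path. The pipeline and policy IDs are placeholders; the payload is validated locally, and the real call (commented out) needs a configured workspace.

```shell
# Sketch: attach the policy to the pipeline's compute via the update path.
# <POLICY_ID> and <PIPELINE_ID> are placeholders.
payload='{
  "clusters": [
    { "label": "default",
      "policy_id": "<POLICY_ID>",
      "apply_policy_default_values": true }
  ]
}'
# Validate before sending.
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"
# databricks pipelines update <PIPELINE_ID> --json "$payload"   # needs a workspace
```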
Hope this helps, Louis.

Adam_Borlase
New Contributor III

Good Afternoon Louis,

Thank you for the detailed answer. The issue I face is that the default gateway is allocating virtual CPUs that are not in our quotas, so I need to apply the compute policy at the creation stage. At this point in the pipeline I can see the settings YAML but have no option to edit the pipeline (see the attached image) as it stands, prior to it completing the setup of a new Lakeflow Connect on the SQL Server.

I have also tried to update the pipeline definition as mentioned above and am getting other failures. What would be the best way to set up a new data ingestion pipeline that applies our compute policy, so that we are only using CPUs that are allocated to us?

Do we need to contact our infrastructure team to increase our quotas, or is there a way to control, at the creation stage of a new SQL Server data ingestion, the type of compute it uses? This is the first one we are setting up, so we are very inexperienced with the issues we are facing.

Louis_Frolio
Databricks Employee

@Adam_Borlase, thanks, this is helpful context. The key is that the SQL Server connector’s ingestion pipeline runs on serverless, while the ingestion “gateway” runs on classic compute in your cloud account, so vCPU family quotas can block gateway creation unless you control instance types at creation time with a compute policy and/or the API.

 

What’s happening and why quotas matter

  • The ingestion gateway runs continuously on classic compute, inside your VNet/VPC, to capture snapshots and change logs; this consumes your cloud vCPUs and can be blocked by per‑VM‑family quotas. You must size/control its VM family during creation.
  • The ingestion pipeline (that applies staged changes into streaming tables) runs on serverless compute and does not consume your cloud vCPU quotas. You don’t need to control instance families there; you can only select serverless performance mode and apply serverless budget policies for tagging.
  • Applying a custom policy for the gateway is currently API‑only (UI doesn’t expose policy selection for the gateway yet). That’s why using the wizard left you without an option to change compute and resulted in quota errors when the default driver VM family wasn’t permitted.

Best-practice setup to enforce your compute policy at creation

 
Use either Databricks Asset Bundles (DAB) or the Pipelines API/CLI to create the gateway and ingestion pipeline while attaching a policy that locks the gateway instance family to one you have quota for.
 
1) Create a gateway‑specific compute policy
Define a dlt policy that fixes allowed node types and enforces UC‑compatible access mode. Databricks recommends using the smallest worker nodes for the gateway because they don’t affect performance; the minimum requirement is 8 cores on the driver to extract changes efficiently.
Example policy definition (adjust instance types to the VM family you have quota for):

```json
{
  "cluster_type": { "type": "fixed", "value": "dlt" },
  "data_security_mode": { "type": "fixed", "value": "USER_ISOLATION" },
  "driver_node_type_id": {
    "type": "allowlist",
    "values": ["Standard_D8s_v5", "Standard_E8d_v4"],
    "defaultValue": "Standard_D8s_v5"
  },
  "node_type_id": {
    "type": "allowlist",
    "values": ["Standard_D8s_v5", "Standard_E8d_v4"],
    "defaultValue": "Standard_D8s_v5"
  },
  "num_workers": { "type": "fixed", "value": 1, "hidden": true }
}
```
Notes:
  • Many workspaces default the gateway driver to a VM in the EDv4 family (for example Standard_E8d_v4); if that family is quota‑blocked in your region, pick another family that you know has capacity (e.g., Dv5) and put it in the policy allowlist/default.
  • Grant yourself permission on the policy so the API can attach it during pipeline creation.
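One way to do that grant, assuming the permissions endpoint for cluster policies; the policy ID and user name are placeholders, and the CLI call (commented out) should be treated as a sketch rather than a verified command.

```shell
# Sketch: grant CAN_USE on the policy so it can be attached at creation time.
# <POLICY_ID> and the user name are placeholders.
acl='{
  "access_control_list": [
    { "user_name": "adam@example.com", "permission_level": "CAN_USE" }
  ]
}'
# Validate the ACL payload locally first.
echo "$acl" | python3 -m json.tool > /dev/null && echo "acl is valid JSON"
# databricks permissions update cluster-policies <POLICY_ID> --json "$acl"
```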
2) Create the ingestion gateway with the policy attached
Create the gateway via CLI/API, include the policy on the clusters block, and apply policy defaults:

```bash
databricks pipelines create --json '{
  "name": "sqlserver-gateway",
  "gateway_definition": {
    "connection_id": "<CONNECTION_ID>",
    "gateway_storage_catalog": "main",
    "gateway_storage_schema": "sqlserver01",
    "gateway_storage_name": "sqlserver01-gateway"
  },
  "clusters": [{
    "label": "default",
    "policy_id": "<POLICY_ID>",
    "apply_policy_default_values": true
  }]
}'
```
This ensures the gateway driver/worker types are enforced by your policy at creation time, preventing Databricks from picking an instance family you don’t have quota for.
3) Create the ingestion pipeline (serverless)
Create the ingestion pipeline pointing at the gateway; it runs on serverless automatically, so you don’t need quotas or node type control there:

```bash
databricks pipelines create --json '{
  "name": "sqlserver-ingestion-pipeline",
  "ingestion_definition": {
    "ingestion_gateway_id": "<GATEWAY_PIPELINE_ID>",
    "objects": [{
      "schema": {
        "source_catalog": "sqlserver01",
        "source_schema": "dbo",
        "destination_catalog": "main",
        "destination_schema": "sqlserver01"
      }
    }]
  }
}'
```
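The ingestion_definition above needs the gateway's pipeline ID. One way to look it up by name, sketched here against a mock of the list response so the filter itself is verifiable; the real CLI call is commented, and the field names are assumed to match the Pipelines API.

```shell
# Sketch: extract the gateway's pipeline ID by name from a pipelines listing.
# The filter runs against a mock response here; swap in the real CLI call.
mock='[{"pipeline_id":"abc-123","name":"sqlserver-gateway"},
       {"pipeline_id":"def-456","name":"other-pipeline"}]'
echo "$mock" | python3 -c '
import json, sys
for p in json.load(sys.stdin):
    if p["name"] == "sqlserver-gateway":
        print(p["pipeline_id"])
'
# prints: abc-123
# databricks pipelines list-pipelines --output json   # real source of this list
```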
Optionally, if your account requires serverless budget policies for tagging, select one in the UI when you create/edit the pipeline; this only affects tagging and not compute sizing.
Common failure causes when updating the pipeline definition

  • Trying to attach the policy in the UI wizard for the gateway—currently policy attachment is API‑only for the gateway; use the pipelines create API/CLI with the clusters block as shown above.
  • Using a policy with disallowed/legacy access modes (for example, NO_ISOLATION); use UC‑compatible Standard (USER_ISOLATION) or Dedicated (SINGLE_USER) in your policy’s data_security_mode.
  • Including autotermination_minutes in a policy applied to Lakeflow Declarative Pipelines compute; pipeline clusters auto‑shutdown and this setting causes errors—omit it in policies used for pipelines/gateways.
  • Passing legacy spark confs like spark.databricks.cluster.profile in policies; forbid them instead to avoid access mode conflicts.
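The last two pitfalls can also be guarded against inside the policy itself; a sketch of the relevant entries, using the standard forbidden attribute type from cluster policy definitions:

```json
{
  "autotermination_minutes": { "type": "forbidden", "hidden": true },
  "spark_conf.spark.databricks.cluster.profile": { "type": "forbidden", "hidden": true }
}
```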

Do you need to raise cloud quotas, or can you control compute type?

  • You can avoid quota issues by enforcing the driver_node_type_id/node_type_id to a VM family with available quota using a compute policy and attaching it at creation time via the API, as shown above.
  • If even the smallest supported families don’t have quota in your region, you’ll need your infra team to request a quota increase on that VM family from the cloud provider; Databricks recommends validating cloud service quotas as part of capacity planning for Lakeflow Connect gateways.
  • The ingestion pipeline itself uses serverless compute, which doesn’t require VM family quotas in your subscription; only the gateway is affected by quotas today.
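On Azure, you can check remaining vCPU quota per VM family before choosing the policy allowlist. The parsing below is shown against a mock of `az vm list-usage` JSON output (region and family are examples); run the real command, commented here, with cloud credentials.

```shell
# Sketch: compute remaining vCPU quota for a VM family from usage data.
# Mock mirrors the shape of `az vm list-usage --output json` entries.
mock='[{"name":{"value":"standardDSv5Family","localizedValue":"Standard DSv5 Family vCPUs"},"currentValue":8,"limit":32}]'
echo "$mock" | python3 -c '
import json, sys
for u in json.load(sys.stdin):
    if "DSv5" in u["name"]["localizedValue"]:
        print(u["name"]["localizedValue"], "remaining:", u["limit"] - u["currentValue"])
'
# prints: Standard DSv5 Family vCPUs remaining: 24
# az vm list-usage --location eastus --output json   # real source of this data
```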

Optional: DABs (Asset Bundles) for repeatable deployments

 
You can also package the gateway and ingestion pipeline definitions into a bundle and deploy across dev/stage/prod; if you go this route, ensure your gateway resource includes the clusters/policy_id block so enforcement happens at deploy time.
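A rough sketch of what that bundle resource could look like, assuming the bundle pipeline resource mirrors the Pipelines API fields; all IDs and names are placeholders, so validate against the current bundle schema before deploying:

```yaml
# Hypothetical bundle fragment; <CONNECTION_ID> and <POLICY_ID> are placeholders.
resources:
  pipelines:
    sqlserver_gateway:
      name: sqlserver-gateway
      gateway_definition:
        connection_id: <CONNECTION_ID>
        gateway_storage_catalog: main
        gateway_storage_schema: sqlserver01
        gateway_storage_name: sqlserver01-gateway
      clusters:
        - label: default
          policy_id: <POLICY_ID>
          apply_policy_default_values: true
```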
 

Quick checklist

  • Write/assign a dlt compute policy that allowlists only instance families you have quota for and sets data_security_mode to USER_ISOLATION (Standard).
  • Create the gateway via API/CLI with the policy attached on clusters.label="default" and apply_policy_default_values=true.
  • Create the ingestion pipeline (serverless) pointing at the gateway; no quotas required.
  • If you still hit quota errors, have infra request a vCPU quota increase for the targeted VM family or switch to a family with quota in the policy allowlist.
 
Hope this helps, Louis.

Adam_Borlase
New Contributor III

Thank you so much Louis,

This has resolved all of our issues! Really appreciate the help.
