<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Error trying to edit Job Cluster via Databricks CLI in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/error-trying-to-edit-job-cluster-via-databricks-cli/m-p/136404#M50559</link>
    <description>&lt;P&gt;Hey&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/177328"&gt;@Adam_Borlase&lt;/a&gt;&amp;nbsp;, Thanks for sharing the command and error—this is a common pitfall when trying to control Lakeflow (DLT) compute with cluster policies.&lt;/P&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;What the error means&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;The message “NO_ISOLATION or custom access modes are not allowed in this workspace” indicates your workspace has been configured to disallow the legacy “No isolation shared” and “Custom” access modes. Admins can hide or block the “No isolation shared” mode in the workspace settings; when a cluster or policy attempts to use it, you’ll get exactly this error. Databricks recommends using modern access modes (Standard or Dedicated) rather than “No isolation shared,” which is considered legacy and not recommended.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;Why editing the cluster won’t apply to DLT/Lakeflow pipelines&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;Lakeflow Declarative Pipelines (formerly DLT) create ephemeral pipeline clusters on-the-fly. You don’t edit those clusters directly; instead, you attach a compute policy to the pipeline’s compute settings (default and maintenance clusters) so that Databricks enforces the policy when it provisions the pipeline’s compute. Policies for pipeline compute should be written with cluster_type set to dlt, and you can have policy defaults automatically applied by setting apply_policy_default_values to true in the pipeline config.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;How to apply a compute policy to Lakeflow (DLT) compute&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;Use one of these supported paths:&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;UI: Open the pipeline, click Settings, uncheck Serverless if you want classic compute, then select your &lt;STRONG&gt;compute policy&lt;/STRONG&gt; in the Compute section and Save. This attaches the policy to both the update and maintenance clusters by default.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;API/CLI (recommended for reproducibility): Update the pipeline definition to include the policy on the clusters definition and apply policy defaults, for example: &lt;CODE&gt;json
{
  "clusters": [
    {
      "label": "default",
      "policy_id": "&amp;lt;policy-id&amp;gt;",
      "apply_policy_default_values": true
    }
  ]
}
&lt;/CODE&gt; In the policy itself, include &lt;CODE&gt;"cluster_type": { "type": "fixed", "value": "dlt" }&lt;/CODE&gt; so it is selectable for pipelines.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="paragraph"&gt;Important: Do not set autotermination_minutes in policies for pipeline compute—the pipeline shuts down its own compute, and this policy setting will cause an error.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;Fix your policy’s access mode&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;Your policy (or cluster spec) is likely setting either NO_ISOLATION (legacy “No isolation shared”) or trying to use a legacy/custom mode. In a Unity Catalog-enabled workspace, use one of the UC-compatible access modes:&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;Use data_security_mode = USER_ISOLATION to get &lt;STRONG&gt;Standard&lt;/STRONG&gt; access mode (multi-user, isolated).&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;Or use data_security_mode = SINGLE_USER to get &lt;STRONG&gt;Dedicated&lt;/STRONG&gt; access mode (assigned to one principal—user or group).&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="paragraph"&gt;Also avoid legacy confs like spark_conf.spark.databricks.cluster.profile in policies; forbid them if necessary: &lt;CODE&gt;json
"spark_conf.spark.databricks.cluster.profile": {
  "type": "forbidden",
  "hidden": true
}
&lt;/CODE&gt;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;If you truly need “No isolation shared”&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;Workspace admins can allow or hide “No isolation shared” from Admin settings. If your organization has enforced user isolation, “No isolation shared” is blocked and any attempt to use it will fail as you observed. Given current best practices, Databricks recommends staying on Standard or Dedicated access modes rather than enabling “No isolation shared.”&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;What’s wrong with the CLI command A few issues:&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;clusters edit modifies an existing interactive cluster, not pipeline-created compute. Editing the ephemeral pipeline cluster will not stick; attach the policy to the pipeline instead (UI or API as above).&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;The token “clusterid 16.4.x-scala2.13” is not a cluster ID—it looks like a runtime version string. The clusters edit command expects an actual cluster ID; pipeline clusters are managed by the pipeline and should be controlled via the pipeline config rather than clusters edit.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;To get the desired effect, use the pipeline update path and include policy_id and apply_policy_default_values in the clusters definition; ensure your policy includes cluster_type = dlt and a UC-compatible data_security_mode.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="paragraph"&gt;Hope this helps, Louis.&lt;/DIV&gt;</description>
    <pubDate>Tue, 28 Oct 2025 13:28:53 GMT</pubDate>
    <dc:creator>Louis_Frolio</dc:creator>
    <dc:date>2025-10-28T13:28:53Z</dc:date>
    <item>
      <title>Error trying to edit Job Cluster via Databricks CLI</title>
      <link>https://community.databricks.com/t5/data-engineering/error-trying-to-edit-job-cluster-via-databricks-cli/m-p/136360#M50550</link>
      <description>&lt;P&gt;Good Day all,&lt;BR /&gt;&lt;BR /&gt;After having issues with Cloud resources allocated to Lakeflow jobs and Gateways I am trying to apply a policy to the cluster that is allocated to the Job. I am very new to a lot of the databricks platform and the administration so all help is appreciated.&lt;BR /&gt;&lt;BR /&gt;I have run the following command:&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;databricks clusters edit clusterid 16.4.x-scala2.13 --apply-policy-default-values --policy-id policyid --num-workers 1 -p adam&lt;/LI-CODE&gt;&lt;P&gt;I am now getting the following error:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Error: NO_ISOLATION or custom access modes are not allowed in this workspace. Please contact your workspace administrator to use this feature.&lt;/LI-CODE&gt;&lt;P&gt;I have looked through the account and workspace settings and can't see where I can change this, I have also done a search and it looked like it is editable on the cluster but I can't edit the cluster for the created pipeline.&lt;BR /&gt;&lt;BR /&gt;Is there a problem with my cli command or where do I need to make the correct change to let me apply a compute policy to the DLT compute?&lt;/P&gt;</description>
      <pubDate>Tue, 28 Oct 2025 09:02:41 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/error-trying-to-edit-job-cluster-via-databricks-cli/m-p/136360#M50550</guid>
      <dc:creator>Adam_Borlase</dc:creator>
      <dc:date>2025-10-28T09:02:41Z</dc:date>
    </item>
    <item>
      <title>Re: Error trying to edit Job Cluster via Databricks CLI</title>
      <link>https://community.databricks.com/t5/data-engineering/error-trying-to-edit-job-cluster-via-databricks-cli/m-p/136404#M50559</link>
      <description>&lt;P&gt;Hey&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/177328"&gt;@Adam_Borlase&lt;/a&gt;&amp;nbsp;, Thanks for sharing the command and error—this is a common pitfall when trying to control Lakeflow (DLT) compute with cluster policies.&lt;/P&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;What the error means&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;The message “NO_ISOLATION or custom access modes are not allowed in this workspace” indicates your workspace has been configured to disallow the legacy “No isolation shared” and “Custom” access modes. Admins can hide or block the “No isolation shared” mode in the workspace settings; when a cluster or policy attempts to use it, you’ll get exactly this error. Databricks recommends using modern access modes (Standard or Dedicated) rather than “No isolation shared,” which is considered legacy and not recommended.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;Why editing the cluster won’t apply to DLT/Lakeflow pipelines&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;Lakeflow Declarative Pipelines (formerly DLT) create ephemeral pipeline clusters on-the-fly. You don’t edit those clusters directly; instead, you attach a compute policy to the pipeline’s compute settings (default and maintenance clusters) so that Databricks enforces the policy when it provisions the pipeline’s compute. Policies for pipeline compute should be written with cluster_type set to dlt, and you can have policy defaults automatically applied by setting apply_policy_default_values to true in the pipeline config.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;How to apply a compute policy to Lakeflow (DLT) compute&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;Use one of these supported paths:&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;UI: Open the pipeline, click Settings, uncheck Serverless if you want classic compute, then select your &lt;STRONG&gt;compute policy&lt;/STRONG&gt; in the Compute section and Save. This attaches the policy to both the update and maintenance clusters by default.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;API/CLI (recommended for reproducibility): Update the pipeline definition to include the policy on the clusters definition and apply policy defaults, for example: &lt;CODE&gt;json
{
  "clusters": [
    {
      "label": "default",
      "policy_id": "&amp;lt;policy-id&amp;gt;",
      "apply_policy_default_values": true
    }
  ]
}
&lt;/CODE&gt; In the policy itself, include &lt;CODE&gt;"cluster_type": { "type": "fixed", "value": "dlt" }&lt;/CODE&gt; so it is selectable for pipelines.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="paragraph"&gt;Important: Do not set autotermination_minutes in policies for pipeline compute—the pipeline shuts down its own compute, and this policy setting will cause an error.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;Fix your policy’s access mode&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;Your policy (or cluster spec) is likely setting either NO_ISOLATION (legacy “No isolation shared”) or trying to use a legacy/custom mode. In a Unity Catalog-enabled workspace, use one of the UC-compatible access modes:&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;Use data_security_mode = USER_ISOLATION to get &lt;STRONG&gt;Standard&lt;/STRONG&gt; access mode (multi-user, isolated).&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;Or use data_security_mode = SINGLE_USER to get &lt;STRONG&gt;Dedicated&lt;/STRONG&gt; access mode (assigned to one principal—user or group).&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="paragraph"&gt;Also avoid legacy confs like spark_conf.spark.databricks.cluster.profile in policies; forbid them if necessary: &lt;CODE&gt;json
"spark_conf.spark.databricks.cluster.profile": {
  "type": "forbidden",
  "hidden": true
}
&lt;/CODE&gt;&lt;/DIV&gt;
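&lt;DIV class="paragraph"&gt;Putting these pieces together, a minimal pipeline-compute policy could look like the following sketch (node types and other limits omitted; adjust to your workspace):&lt;/DIV&gt;

```json
{
  "cluster_type": { "type": "fixed", "value": "dlt" },
  "data_security_mode": { "type": "fixed", "value": "USER_ISOLATION" },
  "spark_conf.spark.databricks.cluster.profile": {
    "type": "forbidden",
    "hidden": true
  }
}
```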
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;If you truly need “No isolation shared”&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;Workspace admins can allow or hide “No isolation shared” from Admin settings. If your organization has enforced user isolation, “No isolation shared” is blocked and any attempt to use it will fail as you observed. Given current best practices, Databricks recommends staying on Standard or Dedicated access modes rather than enabling “No isolation shared.”&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;What’s wrong with the CLI command A few issues:&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;clusters edit modifies an existing interactive cluster, not pipeline-created compute. Editing the ephemeral pipeline cluster will not stick; attach the policy to the pipeline instead (UI or API as above).&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;The token “clusterid 16.4.x-scala2.13” is not a cluster ID—it looks like a runtime version string. The clusters edit command expects an actual cluster ID; pipeline clusters are managed by the pipeline and should be controlled via the pipeline config rather than clusters edit.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;To get the desired effect, use the pipeline update path and include policy_id and apply_policy_default_values in the clusters definition; ensure your policy includes cluster_type = dlt and a UC-compatible data_security_mode.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
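&lt;DIV class="paragraph"&gt;To locate the pipeline to update (instead of running clusters edit), the pipelines command group can help. A sketch, assuming the current unified CLI (exact command names may differ by CLI version):&lt;/DIV&gt;

```shell
# Sketch: find the pipeline ID, then inspect its current compute settings.
databricks pipelines list-pipelines --output json   # note the pipeline_id field
databricks pipelines get your-pipeline-id           # placeholder ID; shows the clusters block
```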
&lt;DIV class="paragraph"&gt;Hope this helps, Louis.&lt;/DIV&gt;</description>
      <pubDate>Tue, 28 Oct 2025 13:28:53 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/error-trying-to-edit-job-cluster-via-databricks-cli/m-p/136404#M50559</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-10-28T13:28:53Z</dc:date>
    </item>
    <item>
      <title>Re: Error trying to edit Job Cluster via Databricks CLI</title>
      <link>https://community.databricks.com/t5/data-engineering/error-trying-to-edit-job-cluster-via-databricks-cli/m-p/136409#M50560</link>
      <description>&lt;P&gt;Good Afternoon Louis,&lt;BR /&gt;&lt;BR /&gt;Thank you for the detailed answer. The issue I face is that the default gateway is allocating virtual CPUs which are not in our quotas, so I need to apply the compute policy at the creation stage. At this point in the pipelines I can see the settings YAML but have no option to edit the pipeline (see the attached image) as it stands, prior to it completing the setup of a new Lakeflow Connect on the SQL Server.&lt;BR /&gt;&lt;BR /&gt;I have also tried to update the pipeline definition as mentioned above and I am getting other failures. What would be the best way to set up a new data ingestion pipeline that applies our compute policy, to ensure we are only using CPUs that are allocated to us?&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;Do we need to contact our infrastructure team to increase our quotas, or is there a way to control the type of compute a new SQL Server data ingestion uses at the creation stage? This is the first one we are setting up, so we are very inexperienced with the issues we are facing.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 28 Oct 2025 13:58:42 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/error-trying-to-edit-job-cluster-via-databricks-cli/m-p/136409#M50560</guid>
      <dc:creator>Adam_Borlase</dc:creator>
      <dc:date>2025-10-28T13:58:42Z</dc:date>
    </item>
    <item>
      <title>Re: Error trying to edit Job Cluster via Databricks CLI</title>
      <link>https://community.databricks.com/t5/data-engineering/error-trying-to-edit-job-cluster-via-databricks-cli/m-p/136521#M50585</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/177328"&gt;@Adam_Borlase&lt;/a&gt;&amp;nbsp;,&amp;nbsp; Thanks, this is helpful context. The key is that the SQL Server connector’s ingestion pipeline runs on serverless, while the ingestion “gateway” runs on classic compute in your cloud account, so vCPU family quotas can block gateway creation unless you control instance types at creation time with a compute policy and/or the API.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="paragraph"&gt;What’s happening and why quotas matter&lt;/H3&gt;
&lt;UL&gt;
&lt;LI class="paragraph"&gt;The &lt;STRONG&gt;ingestion gateway&lt;/STRONG&gt; runs continuously on classic compute, inside your VNet/VPC, to capture snapshots and change logs; this consumes your cloud vCPUs and can be blocked by per‑VM‑family quotas. You must size/control its VM family during creation.&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;The &lt;STRONG&gt;ingestion pipeline&lt;/STRONG&gt; (that applies staged changes into streaming tables) runs on serverless compute and does not consume your cloud vCPU quotas. You don’t need to control instance families there; you can only select serverless performance mode and apply serverless budget policies for tagging.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;Applying a &lt;STRONG&gt;custom policy for the gateway is currently API‑only&lt;/STRONG&gt; (UI doesn’t expose policy selection for the gateway yet). That’s why using the wizard left you without an option to change compute and resulted in quota errors when the default driver VM family wasn’t permitted.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 class="paragraph"&gt;Best-practice setup to enforce your compute policy at creation&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Use either Databricks Asset Bundles (DAB) or the Pipelines API/CLI to create the gateway and ingestion pipeline while attaching a policy that locks the gateway instance family to one you have quota for.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;1) Create a gateway‑specific compute policy&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Define a dlt policy that fixes allowed node types and enforces UC‑compatible access mode. Databricks recommends using the smallest worker nodes for the gateway because they don’t affect performance; the minimum requirement is 8 cores on the driver to extract changes efficiently.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Example policy definition (adjust instance types to the VM family you have quota for): ```json { "cluster_type": { "type": "fixed", "value": "dlt" },&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;"data_security_mode": { "type": "fixed", "value": "USER_ISOLATION" },&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;"driver_node_type_id": { "type": "allowlist", "values": ["Standard_D8s_v5", "Standard_E8d_v4"], "defaultValue": "Standard_D8s_v5" },&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;"node_type_id": { "type": "allowlist", "values": ["Standard_D8s_v5", "Standard_E8d_v4"], "defaultValue": "Standard_D8s_v5" },&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;"num_workers": { "type": "fixed", "value": 1, "hidden": true } } ```&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Notes: * Many workspaces default the gateway driver to a VM in the EDv4 family (for example Standard_E8d_v4); if that family is quota‑blocked in your region, pick another family that you know has capacity (e.g., Dv5) and put it in the policy allowlist/default.&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Grant yourself permission on the policy so the API can attach it during pipeline creation.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="paragraph"&gt;2) Create the ingestion gateway with the policy attached&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Create the gateway via CLI/API and include the policy on the clusters block; also apply policy defaults: &lt;CODE&gt;bash
databricks pipelines create --json '{
  "name": "sqlserver-gateway",
  "gateway_definition": {
    "connection_id": "&amp;lt;CONNECTION_ID&amp;gt;",
    "gateway_storage_catalog": "main",
    "gateway_storage_schema": "sqlserver01",
    "gateway_storage_name": "sqlserver01-gateway"
  },
  "clusters": [{
    "label": "default",
    "policy_id": "&amp;lt;POLICY_ID&amp;gt;",
    "apply_policy_default_values": true
  }]
}'
&lt;/CODE&gt;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;This ensures the gateway driver/worker types are enforced by your policy at creation time, preventing Databricks from picking an instance family you don’t have quota for.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;3) Create the ingestion pipeline (serverless)&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Create the ingestion pipeline pointing at the gateway; it runs on serverless automatically, so you don’t need quotas or node type control there: &lt;CODE&gt;bash
databricks pipelines create --json '{
  "name": "sqlserver-ingestion-pipeline",
  "ingestion_definition": {
    "ingestion_gateway_id": "&amp;lt;GATEWAY_PIPELINE_ID&amp;gt;",
    "objects": [
      { "schema": {
          "source_catalog": "sqlserver01",
          "source_schema": "dbo",
          "destination_catalog": "main",
          "destination_schema": "sqlserver01"
      }}
    ]
  }
}'
&lt;/CODE&gt;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Optionally, if your account requires serverless budget policies for tagging, select one in the UI when you create/edit the pipeline; this only affects tagging and not compute sizing.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;### Common failure causes when updating the pipeline definition * Trying to attach the policy in the UI wizard for the gateway—currently policy attachment is API‑only for the gateway; use the pipelines create API/CLI with the clusters block as shown above.&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;Using a policy with disallowed/legacy access modes (for example, NO_ISOLATION); use UC‑compatible Standard (USER_ISOLATION) or Dedicated (SINGLE_USER) in your policy’s data_security_mode.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;Including autotermination_minutes in a policy applied to Lakeflow Declarative Pipelines compute; pipeline clusters auto‑shutdown and this setting causes errors—omit it in policies used for pipelines/gateways.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;Passing legacy spark confs like spark.databricks.cluster.profile in policies; forbid them instead to avoid access mode conflicts.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 class="paragraph"&gt;Do you need to raise cloud quotas, or can you control compute type?&lt;/H3&gt;
&lt;UL&gt;
&lt;LI class="paragraph"&gt;You can avoid quota issues by enforcing the &lt;STRONG&gt;driver_node_type_id/node_type_id&lt;/STRONG&gt; to a VM family with available quota using a compute policy and attaching it at creation time via the API, as shown above.&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;If even the smallest supported families don’t have quota in your region, you’ll need your infra team to request a quota increase on that VM family from the cloud provider; Databricks recommends validating cloud service quotas as part of capacity planning for Lakeflow Connect gateways.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;The ingestion pipeline itself uses &lt;STRONG&gt;serverless compute&lt;/STRONG&gt;, which doesn’t require VM family quotas in your subscription; only the gateway is affected by quotas today.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 class="paragraph"&gt;Optional: DABs (Asset Bundles) for repeatable deployments&lt;/H3&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;You can also package the gateway and ingestion pipeline definitions into a bundle and deploy across dev/stage/prod; if you go this route, ensure your gateway resource includes the clusters/policy_id block so enforcement happens at deploy time.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H3 class="paragraph"&gt;Quick checklist&lt;/H3&gt;
&lt;UL&gt;
&lt;LI class="paragraph"&gt;Write/assign a &lt;STRONG&gt;dlt&lt;/STRONG&gt; compute policy that allowlists only instance families you have quota for and sets data_security_mode to USER_ISOLATION (Standard).&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;Create the &lt;STRONG&gt;gateway via API/CLI&lt;/STRONG&gt; with the policy attached on clusters.label="default" and apply_policy_default_values=true.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;Create the &lt;STRONG&gt;ingestion pipeline&lt;/STRONG&gt; (serverless) pointing at the gateway; no quotas required.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;If you still hit quota errors, have infra request a &lt;STRONG&gt;vCPU quota increase&lt;/STRONG&gt; for the targeted VM family or switch to a family with quota in the policy allowlist.&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Hope this helps, Louis.&lt;/DIV&gt;</description>
      <pubDate>Wed, 29 Oct 2025 09:11:07 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/error-trying-to-edit-job-cluster-via-databricks-cli/m-p/136521#M50585</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-10-29T09:11:07Z</dc:date>
    </item>
    <item>
      <title>Re: Error trying to edit Job Cluster via Databricks CLI</title>
      <link>https://community.databricks.com/t5/data-engineering/error-trying-to-edit-job-cluster-via-databricks-cli/m-p/136705#M50632</link>
      <description>&lt;P&gt;Thank you so much Louis,&lt;BR /&gt;&lt;BR /&gt;This has resolved all of our issues! Really appreciate the help.&lt;/P&gt;</description>
      <pubDate>Thu, 30 Oct 2025 09:05:16 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/error-trying-to-edit-job-cluster-via-databricks-cli/m-p/136705#M50632</guid>
      <dc:creator>Adam_Borlase</dc:creator>
      <dc:date>2025-10-30T09:05:16Z</dc:date>
    </item>
  </channel>
</rss>

