Product Platform Updates
Stay informed about the latest updates and enhancements to the Databricks platform. Learn about new features, improvements, and best practices to optimize your data analytics workflow.
AlexEsibov
Databricks Employee

IMPORTANT NOTE: We have indefinitely delayed the automatic enforcement described below for workspaces that had enabled workspace IP access lists prior to July 29, 2024. We still recommend manually enforcing IP access lists on compute plane requests in these workspaces by taking the steps outlined below. 

Note: New IP access controls enabled on workspaces after July 29, 2024 are still enforced on data plane traffic, per the original communication below.

---------------------------------

Communication

To improve security for Azure Databricks customers, we'll begin applying workspace IP access controls to compute plane traffic. This change will impact workspaces that use both secure cluster connectivity (no public IP) and workspace IP access lists. We'll begin enforcing this change for all new workspaces starting July 29, 2024 and all existing workspaces starting August 26, 2024.

Required action

To ensure there's no disruption to connectivity to the Azure Databricks control plane, you'll need to take one of the following actions:

  1. Add your compute plane IP addresses to the workspace IP access list.
  2. Configure back-end private link for all workspaces.

If you are not the admin responsible for network connectivity to Azure Databricks, please forward this email to that person.  

Note that while this change only impacts secure cluster connectivity workspaces that use workspace IP access lists, Microsoft has announced that default outbound access for VMs in Azure will be retired on 30 September 2025. Therefore, we recommend proactively taking action.

Help and support 

If you have questions, get answers from community experts in Microsoft Q&A. If you have a support plan and you need technical help, open the Azure portal and select the question mark icon at the top of the page. 

Step-by-Step Instructions

Option 1) Add your compute plane IP addresses to the workspace IP access list

Note: If your compute plane traffic egresses through a firewall/proxy appliance, ensure that the IPs of that appliance are added to the workspace IP ACL policy. If your traffic does not egress through such an appliance, read on for the Azure NAT Gateway deployment steps.

Note 2: Azure charges for Azure NAT Gateway. See pricing details here. 

  1. Deploy one or more Azure NAT Gateways, if one doesn't exist already
    1. How to check if Azure NAT Gateway already exists via Azure Portal
      1. Log in to portal.azure.com
      2. Select the subscription that your workspace and resource group reside in
      3. Navigate to your Azure Databricks workspace
      4. Select the Resource Group that your workspace is in
      5. Check if there is a resource in your resource group of type "NAT Gateway" - if not, you do not have a NAT gateway
    2. How to check if Azure NAT Gateway already exists via CLI (cloud shell)
      1. Query the public subnet for an existing NAT Gateway
        az network vnet subnet show \
        --resource-group <resource group> \
        --vnet-name <vnet name> \
        --name <public subnet name> \
        --query "natGateway.id"
      2. Take the NAT gateway name from the resource ID returned in the previous step and retrieve the public IP of the NAT gateway
        (example resource ID: /subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/brn/providers/Microsoft.Network/natGateways/[NAT_gateway_name])
        
        az network nat gateway show --resource-group <resource group> --name <nat-gateway name> --query "publicIpAddresses[0].id"
        
      3. Take the public IP resource ID returned by the previous command and confirm that the public IP address is static
        (example resource ID: /subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/brn/providers/Microsoft.Network/publicIPAddresses/[NAT_gateway_name])
        
        az network public-ip show \
        --resource-group <resource group> \
        --name <public-ip name> \
        --query "{fqdn: dnsSettings.fqdn, address: ipAddress, type: publicIPAllocationMethod}"
        
        Example: az network public-ip show --resource-group brn --name [NAT_gateway_name] --query "{fqdn: dnsSettings.fqdn, address: ipAddress, type: publicIPAllocationMethod}"
        {
          "address": "[IP_address]",
          "fqdn": null,
          "type": "[e.g., Static]"
        }
        
    3. How to create a NAT gateway
      1. Follow the steps outlined here to create a NAT gateway via the UI or programmatically: Manage a NAT gateway - Azure (a CLI sketch is also included after this procedure)
    4. How to retrieve IPs for NAT gateway
      1. Via Azure Portal
        1. Log in to portal.azure.com
        2. Select the subscription that your workspace and resource group reside in
        3. Navigate to your Azure Databricks workspace
        4. Select the Resource Group that your workspace is in
        5. Select the NAT gateway resource
        6. Navigate to "outbound IP"
        7. Copy the IP address 
        8. If there are multiple NAT gateways deployed (e.g., for multiple zones), collect the IP addresses of all of them
      2. Via CLI (cloud shell)
        az network public-ip show \
        --resource-group <resource group> \
        --name <public-ip name> \
        --query "{fqdn: dnsSettings.fqdn, address: ipAddress, type: publicIPAllocationMethod}"
        
  2.  Add the Azure NAT Gateway IP addresses to the workspace IP access list
    1. Follow the steps outlined here to add the IP addresses for the NAT gateways collected above to your workspace IP ACL policy: https://learn.microsoft.com/en-us/azure/databricks/security/network/front-end/ip-access-list-workspa... (a REST API sketch is also included after this procedure)
  3. Test that your deployment was successful
    1. Log in to your workspace
    2. Navigate to "Preview" > "View All" 
    3. Find "Enforce IP access list on Compute Plane Requests". When toggled on, the IP access list will be enforced against your NAT gateway IP(s)
    4. Wait for up to 10 minutes for the config to be applied to the workspace.
    5. Create and run a Python notebook with a new cluster of any type except serverless.

      Cell #1 

      %pip install databricks-sdk --upgrade
      dbutils.library.restartPython()
      

      Cell #2

      from databricks.sdk import WorkspaceClient
      
      w = WorkspaceClient()
      w.clusters.list()
      
      If the code sample works, then your IP access list is set up correctly.
    6. In case of failures, toggle off "Enforce IP access list on Compute Plane Requests". Wait for up to 10 minutes for the config to be applied to the workspace.
  4.  Optional - Use Azure virtual network service endpoints to access storage. To avoid using NAT for outbound connectivity for accessing storage, you can optionally deploy Azure virtual network service endpoints.

    1. In the Azure portal, go to the Databricks workspace object, click on "see more" and take note of the public subnet name.

    2. Click on the virtual network, open the public (host) subnet for your workspace, and find the configuration entry "Service endpoints".

    3. In the services dropdown, choose either "Microsoft.Storage" (for in-region service endpoint networking) or "Microsoft.Storage.Global" (for cross-region service endpoint networking), then add the service endpoint to the Databricks public subnet (a CLI sketch is also included after this procedure). Note: this approach has the following important limitations:

      1. Enabling service endpoints will change the route for all storage accounts accessed from that subnet, except routes using private endpoints. This means any routes configured to egress through, for example, a customer firewall, will be bypassed.
      2. Each storage account must explicitly allow access from that public subnet. 
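
CLI sketch for the "How to create a NAT gateway" step above: a minimal, untested outline using the Azure CLI, assuming a VNet-injected workspace whose public (host) and private (container) subnet names you know. All values in angle brackets are placeholders, and the Manage a NAT gateway article linked above remains the authoritative reference.

  # Create a static Standard-SKU public IP for NAT gateway egress
  az network public-ip create \
    --resource-group <resource group> \
    --name <public-ip name> \
    --sku Standard \
    --allocation-method Static

  # Create the NAT gateway and attach the public IP
  az network nat gateway create \
    --resource-group <resource group> \
    --name <nat-gateway name> \
    --public-ip-addresses <public-ip name>

  # Associate the NAT gateway with the workspace's public (host) and private (container) subnets
  az network vnet subnet update \
    --resource-group <resource group> \
    --vnet-name <vnet name> \
    --name <public subnet name> \
    --nat-gateway <nat-gateway name>

  az network vnet subnet update \
    --resource-group <resource group> \
    --vnet-name <vnet name> \
    --name <private subnet name> \
    --nat-gateway <nat-gateway name>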
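REST API sketch for step 2 (adding the NAT gateway IPs to the workspace IP access list): a minimal curl example against the Databricks IP access lists API, assuming IP access lists are already enabled on the workspace and that an admin token is available in $DATABRICKS_TOKEN. The workspace URL, label, and IP values are placeholders; the linked documentation describes the full API.

  # Add the NAT gateway egress IPs to an ALLOW list on the workspace
  curl -X POST "https://<workspace-url>/api/2.0/ip-access-lists" \
    -H "Authorization: Bearer $DATABRICKS_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{
          "label": "nat-gateway-egress",
          "list_type": "ALLOW",
          "ip_addresses": ["<NAT gateway public IP 1>", "<NAT gateway public IP 2>"]
        }'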
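CLI sketch for optional step 4 (service endpoints): a one-command outline, assuming the public (host) subnet name is known. Note that --service-endpoints sets the subnet's service endpoint list, so include any endpoints that are already configured on the subnet.

  # Add the Microsoft.Storage service endpoint to the workspace's public (host) subnet
  # (use Microsoft.Storage.Global instead for cross-region service endpoint networking)
  az network vnet subnet update \
    --resource-group <resource group> \
    --vnet-name <vnet name> \
    --name <public subnet name> \
    --service-endpoints Microsoft.Storage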

Option 2) Configure back-end private link for all workspaces, if not already done

  1. Follow the steps outlined here to configure back-end private link for each workspace: Enable Azure Private Link back-end and front-end connections - Azure Databricks | Microsoft Learn (a CLI sketch is also included below)

Note: Azure charges for Azure Private Link. See details here.
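
For reference, a minimal Azure CLI sketch of the back-end private endpoint creation, assuming a subnet for private endpoints already exists in the compute plane VNet; all names in angle brackets are placeholders. The Microsoft Learn article linked above is the authoritative guide and also covers the required private DNS configuration.

  # Create a private endpoint to the workspace for back-end (compute plane to control plane) traffic
  az network private-endpoint create \
    --resource-group <resource group> \
    --name <private-endpoint name> \
    --vnet-name <vnet name> \
    --subnet <private-endpoint subnet name> \
    --private-connection-resource-id <Databricks workspace resource ID> \
    --group-id databricks_ui_api \
    --connection-name <connection name>

  # DNS: the workspace URL must resolve to the private endpoint IP from the compute plane,
  # typically via a privatelink.azuredatabricks.net private DNS zone linked to the VNet.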

 

 

21 Comments
AlexEsibov
Databricks Employee

@elvisleung please see the response above from @rugger-bricks and let us know if it resolves your question

MarkusL
New Contributor

Hi, we are affected by this change. We are using classic compute with VNet injection, but we do not have a NAT gateway as explained above, nor the back-end private link.

If I go for option 1 by adding the compute plane IP to the IP access list, what IP address should I add? Is it the addresses of the VNet that I have injected?

AlexEsibov
Databricks Employee

Hi @MarkusL, please deploy a NAT gateway, or configure back-end private link. If you go with the former, the instructions above include a step for "Deploy one or more Azure NAT Gateways, if one doesn't exist already". Let me know if you have any questions. 

Said
New Contributor

Hi,

After enabling the IP access list for my Databricks workspace, the CI/CD pipeline between the DevOps repo and Databricks is failing. The IPs that are failing are those I added to the access list, but all of the failing IPs are dynamic. How can I mitigate this issue? TIA

AlexEsibov
Databricks Employee

@Said to be clear - this communication was scoped only to customers who are already using workspace IP ACLs. That said, it's hard to know exactly why your scenario is failing - if you have a support subscription, you can follow the steps here to submit a support ticket. That would probably be the easiest way to diagnose the issue: https://docs.databricks.com/en/resources/support.html. You can also submit a support case by emailing help@databricks.com

Avvar2022
Contributor

Hi 

Our Databricks setup uses VNet injection and SCC, with Private Link enabled as a simplified deployment. As per that implementation, I believe we have one front-end and back-end private endpoint and one browser authentication private endpoint.

I believe we won't be impacted, but I wanted to get some input from the community.

Enable Azure Private Link as a simplified deployment - Azure Databricks | Microsoft Learn

[screenshot attached]

 

AlexEsibov
Databricks Employee

@Avvar2022 correct - if you already have backend private link configured, you do not need to take action on that workspace.

MarkusL
New Contributor

@AlexEsibov we are going to implement the nat-gateway solution here. I have two questions:

1. We have 3 workspaces in dev and 3 in prod. Should we use a unique public IP/NAT gateway for each workspace, or should/could they share the same? The 3 workspaces are divided into two VNets, please see the clarification below:

  • vnet1
    • workspace 1
    • workspace 2
  • vnet2
    • workspace 3

2. Should we NAT the traffic from both the public and private cluster subnets, or is it only the public clusters that communicate with the control plane?

Abadoom
New Contributor

Hello,

In my company we have Databricks deployed in Azure with the simplified Terraform deployment. We have two private endpoints, one for databricks_ui_api and a second for browser_authentication. As far as I understand, we don't have to make any updates regarding the security update on 26th August. Is that right? We are a little worried. Thank you!

kyrrewk
New Contributor

How are you supposed to do this when using the setup described here: https://www.databricks.com/blog/data-exfiltration-protection-with-azure-databricks

 

Would you need to deploy a NAT gateway together with the Azure Firewall then?