Recommendation: if the external SFTP vendor strictly requires source-IP allowlisting, the most reliable path is usually classic compute with your own NAT gateway/static public IP. For serverless, Azure Databricks can reach public external resources via NAT IPs, but obtaining a deterministic allowlistable outbound IP set is not a simple self-serve workflow today and may require account-team/private-preview support.
Option 1 (recommended)
Use classic compute (ideally VNet-injected) with your own NAT gateway / static public IP, and have the SFTP provider allowlist that IP. Databricks docs explicitly recommend a stable egress IP when external systems require source-IP allowlisting.
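Once the NAT gateway is in place, it is worth sanity-checking from a notebook that the cluster really egresses through the IP the vendor allowlisted. A minimal stdlib-only sketch (api.ipify.org is an arbitrary public IP-echo service, and the CIDR ranges below are placeholders for whatever the vendor actually allowlisted):

```python
import ipaddress
import urllib.request

def egress_ip() -> str:
    """Return the public IP this cluster egresses from, via an IP-echo service."""
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode()

def is_allowlisted(ip: str, cidrs: list[str]) -> bool:
    """True if `ip` falls inside any of the vendor's allowlisted CIDR ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(c) for c in cidrs)

# From a classic-compute notebook (placeholder CIDR):
# is_allowlisted(egress_ip(), ["20.50.1.0/24"])
```

If this returns False, traffic is leaving through some other path (e.g. the default Databricks-managed egress) and the NAT gateway association needs another look.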
Option 2
Stay on serverless, but involve your Databricks account team to enable the serverless stable outbound IP path. Azure docs note that serverless reaches non-private resources via NAT IPs, and the newer outbound-IP mechanism is in preview and delivered via a JSON endpoint, while the older static IP lists are being retired.
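If you go this route, you would consume that JSON endpoint to build the vendor's allowlist. The payload shape below is purely illustrative (the real schema comes from the preview endpoint and may differ; confirm it with your account team):

```python
import json

# Hypothetical payload shape -- NOT the documented schema, just a stand-in
# to show the workflow of turning published NAT IPs into firewall entries.
sample = json.loads("""
{
  "region": "westeurope",
  "outbound_ips": ["4.1.2.3", "4.1.2.4"]
}
""")

def allowlist_entries(payload: dict) -> list[str]:
    # Flatten the published IPs into /32 entries for the vendor's firewall form.
    return [f"{ip}/32" for ip in payload["outbound_ips"]]

entries = allowlist_entries(sample)  # ['4.1.2.3/32', '4.1.2.4/32']
```

Because the list is delivered as an endpoint rather than a static page, it is worth re-fetching it periodically so the vendor's allowlist can be updated if Databricks rotates or expands the IP set.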
Option 3
If the provider can expose the SFTP endpoint through Azure Private Link / a private endpoint path (for example, via an Azure-hosted front end or your VNet), use an NCC private endpoint from serverless. This is the cleanest serverless-native option, but it is only practical if the endpoint can be presented as an Azure/VNet private target rather than a generic internet SFTP host.
A few practical notes:
- The Lakeflow Connect SFTP connector is supported on serverless and classic (DBR 17.3+), and the docs specifically say the SFTP server must allow either the Databricks VPC/VNet range for classic or the stable IPs for serverless.
- If you use serverless egress control, you can explicitly allow the SFTP FQDN, but that controls Databricks outbound policy; it does not replace the vendor's inbound source-IP allowlist requirement.
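A quick way to separate these two failure modes is a plain TCP probe from the compute in question: if the connect fails, your own egress policy (or routing) is the blocker; if it succeeds but the transfer is later rejected, the vendor-side allowlist or auth is the blocker. A stdlib-only sketch (the host name is hypothetical):

```python
import socket

def can_reach(host: str, port: int = 22, timeout: float = 5.0) -> bool:
    """TCP-connect probe: True if the outbound path to host:port is open.

    This only tests that your side permits the connection; the vendor's
    inbound source-IP allowlist is enforced on their side and can still
    reject you after the handshake or at SSH auth time.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical vendor host):
# can_reach("sftp.vendor.example.com", 22)
```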