3 weeks ago
Why does Databricks require creating AWS resources on our AWS account (IAM role, VPC, subnets, security groups) when deploying a Traditional workspace, even if we plan to use only serverless compute, which runs fully in the Databricks account and only needs an S3 bucket in our AWS account? Is there any way to bypass those classic-compute resource creations?
3 weeks ago
Hi @APJESK ,
To keep the answer simple: no, there is no way to bypass this. You need to deploy the workspace (and all related resources) to use serverless.
3 weeks ago
Thanks, I understand that to use Databricks serverless compute, a full Traditional workspace (with VPC, subnets, IAM roles, etc.) must still be deployed in our AWS account.
However, our security team is not comfortable with this approach. Our preference was to manage only an S3 bucket for storage, to simplify compliance.
It seems that serverless does not really reduce the networking and security overhead as we expected; we had assumed the serverless compute approach would remove most of the networking burden.
Could you share any alternative options, best practices, or guidance?
2 weeks ago
Hey @APJESK, Databricks requires AWS resources such as IAM roles, VPCs, subnets, and security groups when deploying a Traditional workspace, even if you plan to use only serverless compute, because of how the platform distinguishes between workspace types and the underlying architecture of workspace creation and management.
Here's a clearer breakdown:
Traditional Workspaces Assume Classic Compute Will Be Used
A "Traditional" (or "classic") Databricks workspace on AWS is designed with the assumption that customer workloads may need classic compute. Clusters run in your AWS account, within your VPC and subnets, secured by your security groups, and governed by your IAM roles. Classic compute clusters (all-purpose and job clusters) are launched directly in your environment, making these network and IAM resources mandatory.
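To make that concrete, here is a minimal boto3 sketch of the kind of AWS-side resources a Traditional workspace expects. The names, CIDR ranges, and region are illustrative assumptions, and a real deployment also needs route tables, NAT, security group rules, and an attached permissions policy that I have omitted:

```python
import json
import boto3

# Illustrative values only; your region, CIDRs, and names will differ.
REGION = "us-east-1"
ec2 = boto3.client("ec2", region_name=REGION)
iam = boto3.client("iam")

# Customer-managed VPC with two subnets in different AZs
# (Databricks requires at least two subnets for classic compute).
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_a = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.0.0/18", AvailabilityZone=f"{REGION}a"
)["Subnet"]["SubnetId"]
subnet_b = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.64.0/18", AvailabilityZone=f"{REGION}b"
)["Subnet"]["SubnetId"]

# Security group for cluster-internal traffic (ingress/egress rules omitted).
sg_id = ec2.create_security_group(
    GroupName="databricks-classic-sg",
    Description="Databricks classic compute",
    VpcId=vpc_id,
)["GroupId"]

# Cross-account IAM role that Databricks assumes to launch clusters.
# The principal is the Databricks AWS account ID published in the docs
# (verify against current documentation); the ExternalId must be your
# own Databricks account ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::414351767826:root"},
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringEquals": {"sts:ExternalId": "<your-databricks-account-id>"}
        },
    }],
}
role_arn = iam.create_role(
    RoleName="databricks-cross-account-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)["Role"]["Arn"]

print(vpc_id, subnet_a, subnet_b, sg_id, role_arn)
```

This is exactly the surface area your security team has to own and audit for classic compute, which is why it cannot be skipped on a Traditional workspace.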
Serverless Compute Differs Architecturally
Serverless compute resources run in a Databricks-managed compute plane, not in your AWS account. With serverless, Databricks manages all infrastructure, including networking and identity. The only required resource in your account is typically an S3 bucket for workspace system data, DBFS, and Unity Catalog-managed storage.
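By contrast, the AWS-side footprint for serverless shrinks to roughly the boto3 sketch below. The bucket name is hypothetical, and the bucket policy is an abbreviated assumption based on the root-storage docs, so copy the exact policy from the current Databricks documentation:

```python
import json
import boto3

REGION = "us-west-2"
BUCKET = "my-databricks-root-bucket"  # hypothetical name

s3 = boto3.client("s3", region_name=REGION)

# Root storage bucket for workspace system data, DBFS, and
# Unity Catalog-managed storage.
s3.create_bucket(
    Bucket=BUCKET,
    # Omit CreateBucketConfiguration entirely when REGION is us-east-1.
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Bucket policy granting the Databricks AWS account access. This is an
# abbreviated assumption; use the exact policy from the Databricks
# root-storage documentation.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "GrantDatabricksAccess",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::414351767826:root"},
        "Action": [
            "s3:GetObject", "s3:PutObject", "s3:DeleteObject",
            "s3:ListBucket", "s3:GetBucketLocation",
        ],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```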
Workspace Creation Workflow Is Driven by Workspace Type
When you create a Traditional workspace, Databricks provisions (or requires you to provision) the VPC, subnets, security groups, and IAM roles for classic compute, regardless of whether you immediately plan to use serverless. Serverless compute is added on top of a Traditional workspace; it is not a separate workspace type.
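As a sketch of that provisioning workflow using the Databricks SDK for Python (databricks-sdk): the method and field names below reflect the Account provisioning API as I understand it, so verify them against the SDK docs before use. All IDs and ARNs are placeholders.

```python
from databricks.sdk import AccountClient
from databricks.sdk.service import provisioning

# Account-level client; authentication comes from environment variables
# or a profile in ~/.databrickscfg.
a = AccountClient(
    host="https://accounts.cloud.databricks.com",
    account_id="<your-databricks-account-id>",
)

# Register the cross-account role, root bucket, and customer-managed VPC...
cred = a.credentials.create(
    credentials_name="classic-credentials",
    aws_credentials=provisioning.CreateCredentialAwsCredentials(
        sts_role=provisioning.CreateCredentialStsRole(
            role_arn="arn:aws:iam::<your-aws-account>:role/databricks-cross-account-role"
        )
    ),
)
storage = a.storage.create(
    storage_configuration_name="root-bucket",
    root_bucket_info=provisioning.RootBucketInfo(
        bucket_name="my-databricks-root-bucket"
    ),
)
network = a.networks.create(
    network_name="classic-vpc",
    vpc_id="<vpc-id>",
    subnet_ids=["<subnet-id-a>", "<subnet-id-b>"],
    security_group_ids=["<security-group-id>"],
)

# ...then create the Traditional workspace that ties them together.
# .result() blocks until provisioning finishes.
ws = a.workspaces.create(
    workspace_name="my-traditional-workspace",
    aws_region="us-west-2",
    credentials_id=cred.credentials_id,
    storage_configuration_id=storage.storage_configuration_id,
    network_id=network.network_id,
).result()
print(ws.workspace_id)
```

Note how the credential, storage, and network configurations are required inputs to the workspace create call: that coupling is why a Traditional workspace cannot exist without them.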
That said, Databricks recently introduced "Serverless Workspaces" (Public Preview). These are designed specifically for serverless-only environments:
They do not require a customer-managed VPC, subnets, security groups, or cross-account IAM role.
They rely entirely on Databricks-managed infrastructure and default storage.
They are ideal for organizations that want serverless compute without the AWS setup overhead of Traditional workspaces.
Some features (such as custom storage buckets and advanced networking) may be limited or not yet GA in serverless workspaces, so review the documentation before migrating production workloads. And if you're wondering whether you can convert a classic workspace into a serverless one, the answer is no; you would need to create a new Serverless Workspace.
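If you do evaluate Serverless Workspaces, creation should be far simpler because there is no credential, network, or storage configuration to register. A speculative sketch only: the compute_mode parameter is my assumption about the Public Preview Account API, so confirm the exact field in the current docs before relying on it.

```python
from databricks.sdk import AccountClient

a = AccountClient(
    host="https://accounts.cloud.databricks.com",
    account_id="<your-databricks-account-id>",
)

# No credentials_id, storage_configuration_id, or network_id is passed.
# compute_mode="SERVERLESS" reflects my reading of the Public Preview
# API and may change; verify against the current Account API reference.
ws = a.workspaces.create(
    workspace_name="serverless-only-workspace",
    aws_region="us-west-2",
    compute_mode="SERVERLESS",
).result()
print(ws.deployment_name)
```

Given your security team's preference, this is the option that comes closest to "just an S3 bucket" (or no bucket at all, with default storage) on your side.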
Hope this helps.
Cheers, Louis.