Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

Regarding Traditional workspace - Classic and Serverless Architecture

APJESK
New Contributor III

Why does Databricks require creating AWS resources on our AWS account (IAM role, VPC, subnets, security groups) when deploying a Traditional workspace, even if we plan to use only serverless compute, which runs fully in the Databricks account and only needs an S3 bucket in our AWS account? Is there any way to bypass those classic-compute resource creations?

3 REPLIES

szymon_dybczak
Esteemed Contributor III

Hi @APJESK ,

To keep the answer simple - no, there is no way to bypass this. You need to deploy a workspace (and all related resources) to use serverless.

APJESK
New Contributor III

@szymon_dybczak 

Thanks, I understand that, to use Databricks Serverless compute, a full Traditional workspace (with VPC, subnets, IAM roles, etc.) must still be deployed in our AWS account.

However, our security team is not comfortable with this approach. Our preference was to manage only an S3 bucket for storage, to simplify compliance.

It seems that serverless does not really reduce our networking and security overhead as we expected; we had assumed the serverless compute approach would remove most of the networking burden.

Could you share any alternative options, best practices, or guidance?

BigRoux
Databricks Employee

Hey @APJESK , Databricks requires AWS resources such as IAM roles, VPCs, subnets, and security groups when deploying a Traditional workspace, even if you plan to use only serverless compute, because of how the platform distinguishes between workspace types and the underlying architecture of workspace creation and management.

Here's a clearer breakdown:

  1. Traditional Workspaces Assume Classic Compute Will Be Used

    A "Traditional" (or "classic") Databricks workspace on AWS is designed with the assumption that customer workloads may need classic compute. Clusters run in your AWS account, within your VPC and subnets, secured by your security groups, and governed by your IAM roles. Classic compute clusters (all-purpose and job clusters) are launched directly in your environment, making these network and IAM resources mandatory.

  2. Serverless Compute Differs Architecturally

    Serverless compute resources run in a Databricks-managed compute plane, not in your AWS account. With serverless, Databricks manages all infrastructure, including networking and identity. The only required resource in your account is typically an S3 bucket for workspace system data, DBFS, and Unity Catalog-managed storage.

  3. Workspace Creation Workflow Is Driven by Workspace Type

    When you create a Traditional workspace, Databricks provisions (or requires you to provision) the VPC, subnets, security groups, and IAM roles for classic compute, regardless of whether you immediately plan to use serverless. Serverless compute is added on top of a Traditional workspace; it is not a separate workspace type.
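To make point 3 concrete, here is a rough sketch of what a Traditional workspace creation request looks like against the Databricks Account API. The field names follow the public accounts API for AWS; the helper function and the placeholder IDs are illustrative, not an official client:

```python
# Sketch: the payload for creating a Traditional (classic) workspace via
# the Databricks Account API. Even if you only ever run serverless
# compute, the classic-compute configuration objects are still required.
# The helper and placeholder IDs below are illustrative assumptions.

def classic_workspace_payload(name: str, region: str) -> dict:
    """Build a create-workspace payload for a Traditional workspace."""
    return {
        "workspace_name": name,
        "aws_region": region,
        # Credentials configuration: the registered cross-account IAM role.
        "credentials_id": "<credentials-config-id>",
        # Storage configuration: the S3 root bucket in your account.
        "storage_configuration_id": "<storage-config-id>",
        # Network configuration: customer-managed VPC, subnets,
        # and security groups -- mandatory for classic compute.
        "network_id": "<network-config-id>",
    }

payload = classic_workspace_payload("demo-workspace", "us-east-1")
print(sorted(payload))
```

The point is that `credentials_id` and `network_id` reference AWS resources you must create and register first, which is exactly the overhead the original question is about.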

That said, Databricks recently introduced "Serverless Workspaces" (Public Preview). These are designed specifically for serverless-only environments:

  • They do not require a customer-managed VPC, subnets, security groups, or cross-account IAM role.

  • They rely entirely on Databricks-managed infrastructure and default storage.

  • They are ideal for organizations that want serverless compute without the AWS setup overhead of Traditional workspaces.
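By contrast, a Serverless Workspace drops the classic-compute objects entirely. A minimal sketch of the difference, under the assumption (check the current Public Preview docs, as details may change) that no network or cross-account credential configuration is referenced:

```python
# Sketch (assumption, Public Preview): a Serverless Workspace creation
# payload omits the classic-compute configuration objects entirely.
# Databricks manages compute, networking, and default storage.

def serverless_workspace_payload(name: str, region: str) -> dict:
    """Build a create-workspace payload for a serverless-only workspace."""
    return {
        "workspace_name": name,
        "aws_region": region,
        # Note: no credentials_id, no storage_configuration_id,
        # no network_id -- nothing to provision in your AWS account.
    }

p = serverless_workspace_payload("demo-workspace", "us-east-1")
print("network_id" in p)
```

This is why the serverless-workspace model is the one that actually matches your security team's goal of minimizing AWS-side footprint.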

Some features (such as custom storage buckets and advanced networking) may be limited or not yet GA in serverless workspaces, so review the documentation before migrating production workloads. And if you're wondering whether you can convert a classic workspace into a serverless one: the answer is no. You would need to create a new Serverless Workspace.


Hope this helps.

Cheers, Louis.