Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

[FREE TRIAL] Missing All-Purpose Clusters Access - New Account

pabloratache
New Contributor

Issue Description: I created a new Databricks Free Trial account ("For Work" plan with $400 credits) but I don't have access to All-Purpose Clusters or PySpark compute. My workspace only shows SQL-only features.

Current Setup:

- Account Email: ronel.ratache@gmail.com
- Workspace Name: ronel.ratache
- Trial Plan: For Work (Premium Trial)
- Cloud Provider: AWS
- Region: us-west-2
- Account Status: Active, successfully deployed

What I Can Access:

- SQL Warehouses
- Vector Search
- Apps
- Lakehouse Postgres

What I Cannot Access (Missing):

- All-Purpose Clusters
- PySpark Compute
- Structured Streaming
- Job Clusters

What I See in the UI:

Left Sidebar → Compute

Only shows:
- Job Runs
- Data Ingestion

NO option for:
- All-Purpose Clusters
- Compute Resources
- Spark Clusters

Why I Need This: I'm building a Databricks Lakehouse POC project for my data engineering portfolio that specifically requires:

1. PySpark Development - Write and test Spark code
2. Structured Streaming - Real-time data ingestion pipelines
3. Delta Lake Transactions - ACID compliance
4. Medallion Architecture - Bronze/Silver/Gold layers

Questions

1. Is this a provisioning delay?
2. Do I need to request manual enablement for All-Purpose Clusters?
3. Is there a different Free Trial tier that includes PySpark?

Thank you for your help!

@databricks-support or @community-engineers

1 ACCEPTED SOLUTION

Louis_Frolio
Databricks Employee

Ah, got it @pabloratache , I did some digging and here is what I found (learned a few things myself).

Thanks for the detailed context — this behavior is expected for the current Databricks 14‑day Free Trial (“For Work” plan).
 

What’s happening with your Free Trial

  • The 14‑day For Work trial provisions a serverless workspace by design, which exposes SQL Warehouses and serverless compute, not classic All‑Purpose clusters.
  • During the trial, serverless compute is available for notebooks, jobs, and Lakeflow Declarative Pipelines, with scaling capped to 50 DBUs/hr. One SQL warehouse per workspace is allowed (also capped), and GPUs are not available.
  • External network access is limited in trial workspaces, which can affect streaming sources/sinks; the recommended workaround is to upload data into the workspace for use in pipelines.

Answers to your questions

  • 1) Is this a provisioning delay?
    No — this is by design for the Free Trial serverless workspace, not a delay.
  • 2) Do I need manual enablement for All‑Purpose Clusters?
    Manual enablement isn’t available on Free Trials. To use All‑Purpose clusters, you need to upgrade to a paid workspace (Standard/Premium); then you can create clusters (subject to your cloud account quotas).
  • 3) Is there a different Free Trial tier that includes PySpark?
    The trial does include PySpark, but via serverless compute in notebooks (not classic clusters). In a notebook, click Connect (top right) and choose Serverless to run Python/PySpark code.

How to run PySpark now (without clusters)

  • Create or open a notebook, click Connect (top‑right), choose Serverless, set the language to Python, and run your PySpark code there.
# Quick smoke test in a serverless notebook
from pyspark.sql import functions as F

df = spark.range(0, 10).withColumn("ts", F.current_timestamp())
df.write.format("delta").mode("overwrite").saveAsTable("demo.bronze_range")

display(spark.table("demo.bronze_range"))
  • You can orchestrate pipelines with Lakeflow Jobs or Jobs on serverless compute during the trial; note the 50 DBUs/hr cap and external network limits.

Why your UI looks SQL‑only under Compute

  • In serverless trial workspaces, you won’t see the classic “Clusters” page; compute for notebooks is requested inline via the Connect → Serverless flow rather than through the Clusters UI.
 

If you need classic All‑Purpose clusters now

  • Upgrade to a paid plan and use a traditional workspace deployed in your AWS account; then create All‑Purpose clusters and run PySpark/Streaming as needed (you’ll pay cloud infra costs in addition to DBUs after trial).
 

FYI: Trial page contact

  • For onboarding questions, you can email onboarding‑help@databricks.com from the trial page.
 
Hope this makes things clear.
 
Cheers, Louis.


4 REPLIES

Louis_Frolio
Databricks Employee

Greetings @pabloratache , quick clarification question: are you working on Databricks Free Edition or on a Databricks 14-day Free Trial? The two are quite different in terms of capabilities and workspace behavior. Please advise when you can. Cheers, Louis.

Hi Louis,

Thank you for the quick response!

To clarify: I'm on a **Databricks 14-day Free Trial** (not Free Edition).

Signup path: https://www.databricks.com/try-databricks → "For Work" plan

The issue is specifically that the Free Trial is provisioned with SQL-only access. I need All-Purpose Clusters enabled for PySpark development.

Is this a provisioning limitation, or can it be manually enabled for Free Trials?

Thank you!


Thank you for taking the time to investigate and explain this. Really appreciate it!

Best regards,
Pablo