06-26-2025 01:09 AM
Hi Databricks community,
Good day.
Has anyone managed to use a serverless all-purpose cluster in the Databricks Free Edition?
According to the documentation, Free Edition does allow users to create a small serverless all-purpose cluster. But in my Free Edition workspace, there's no option to create an all-purpose cluster; only a serverless SQL warehouse is available.
**Note: I am trying to run some PySpark code in a notebook, and a serverless SQL warehouse only supports SQL. If Free Edition doesn't allow creating an all-purpose cluster, it sounds like there's no other way to run non-SQL notebooks/code in Free Edition.
https://docs.databricks.com/aws/en/getting-started/free-edition-limitations#compute-limitations
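For reference, even a minimal PySpark snippet like the one below (purely illustrative sample data) needs Python-capable compute, which a SQL warehouse cannot provide:

# A trivial PySpark example; a SQL warehouse cannot execute this
# ("spark" is the SparkSession that Databricks notebooks provide automatically)
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])
df.filter(df.id > 1).show()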
My Free Edition workspace:
Thanks.
06-26-2025 04:03 AM
Hello @Kai-
You can use serverless compute, and there is no need to provision it; Databricks handles that for you.
All you need to do is click Connect at the top right, choose Serverless, and you are good to go
to run non-SQL code in notebooks.
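For instance, once the notebook is attached to Serverless, a quick sanity check (just an illustrative snippet) could be:

# Verify the serverless session runs non-SQL (PySpark) code
# ("spark" is the SparkSession Databricks notebooks provide automatically)
spark.range(5).selectExpr("id", "id * 2 AS doubled").show()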
Hope that helps. Best, Ilir
07-28-2025 02:27 AM
But in Databricks Free Edition we only have serverless compute, so how can we view Spark jobs and the Spark UI?
09-09-2025 11:55 PM
Hi @Navneet670 ,
It is a limitation of serverless compute; the Spark UI is not available:
https://docs.databricks.com/aws/en/compute/serverless/limitations#general-limitations
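As a partial workaround (just a sketch, assuming you mainly need to inspect query behavior rather than the full UI), explain() should still print the query plan from within the notebook:

from pyspark.sql.functions import col

# The Spark UI is unavailable on serverless, but explain() still prints the plan
df = spark.range(100).withColumn("bucket", col("id") % 10).groupBy("bucket").count()
df.explain()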
FYR, another community post that discusses the same topic:
https://community.databricks.com/t5/get-started-discussions/is-spark-ui-available-on-the-databricks-...
Thanks.
Best Regards,
Kai
06-26-2025 11:45 PM
Hi @ilir_nuredini,
Thanks for the suggestion, it worked!
Just sharing for information: I noticed that an error is prompted when choosing Serverless in an imported notebook (tested with HTML and DBC), but it works fine in a new notebook (with the same PySpark code).
Imported via DBC:
Same code in new notebook:
Thanks.