Databricks doesn't stop compute resources in GCP
01-16-2023 11:59 AM
I started using Databricks on Google Cloud, but I'm seeing some unexpected charges.
When I create a cluster, I can see compute resources being created in GCP, but when I stop the cluster those resources stay up and never shut down. This results in additional charges beyond the cost per DBU used.
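For reference, this is a minimal sketch of how I list what's still running in the project (assuming the google-cloud-container and google-cloud-compute client libraries; the project ID is a placeholder):

```python
# Sketch: inventory GKE clusters and compute instances left in the project.
# Assumes: pip install google-cloud-container google-cloud-compute
from google.cloud import compute_v1, container_v1

PROJECT = "my-gcp-project"  # placeholder project ID

# GKE clusters in all locations ("-" is the all-locations wildcard).
gke = container_v1.ClusterManagerClient()
for cluster in gke.list_clusters(parent=f"projects/{PROJECT}/locations/-").clusters:
    print("GKE cluster:", cluster.name, cluster.status)

# Compute instances across all zones of the project.
compute = compute_v1.InstancesClient()
for zone, scoped in compute.aggregated_list(request={"project": PROJECT}):
    for instance in scoped.instances:
        print("Instance:", zone, instance.name, instance.status)
```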
I also noticed some alerts in Kubernetes that say:
- Pod is blocking scale down because it doesn’t have enough Pod Disruption Budget (PDB)
- Can’t scale up a node pool because of a failing scheduling predicate
I'm not sure whether these alerts are related to the issue.
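In case it helps, this is roughly how I inspected the Pod Disruption Budgets behind the first alert (a sketch assuming kubectl access to the GKE cluster and the official kubernetes Python client):

```python
# Sketch: list all PDBs that could block autoscaler scale-down.
# Assumes: pip install kubernetes, and a kubeconfig for the GKE cluster.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context

policy = client.PolicyV1Api()
for pdb in policy.list_pod_disruption_budget_for_all_namespaces().items:
    print(pdb.metadata.namespace, pdb.metadata.name,
          "minAvailable:", pdb.spec.min_available,
          "maxUnavailable:", pdb.spec.max_unavailable)
```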
Thanks!
- Labels: GCP, Google, Google cloud, Stop
01-17-2023 02:38 AM
How do you deploy your cluster(s)?
I see you mention Kubernetes, so it might be a config issue.
But it could also be a Databricks bug, as Databricks has only been available on GCP for a short while.
01-17-2023 09:52 AM
Basically, I just created some single-node clusters inside Databricks, without configuring anything in GCP.
After cluster creation I can see that Databricks creates some resources in GCP (Kubernetes clusters, compute instances, and instance groups). I deleted these single-node clusters in Databricks, but the resources in GCP stay active. I tried deleting them directly in GCP, and that works, but minutes later they are automatically recreated.
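For reference, this is roughly how I create them; a sketch with hypothetical values, using the Databricks Python SDK:

```python
# Sketch: a single-node cluster with auto-termination enabled.
# Node type and runtime version here are hypothetical choices.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # reads host/token from the environment
w.clusters.create(
    cluster_name="single-node-test",
    spark_version="11.3.x-scala2.12",
    node_type_id="n2-highmem-4",   # a GCP node type
    num_workers=0,
    autotermination_minutes=30,    # stops the cluster VMs after idle timeout
    spark_conf={
        "spark.databricks.cluster.profile": "singleNode",
        "spark.master": "local[*]",
    },
    custom_tags={"ResourceClass": "SingleNode"},
)
```

Auto-termination stops the cluster after the idle timeout, but even then the GCP resources don't go away in my case.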
I have been using Databricks on Azure without any billing issues, so I also think it could be something specific to GCP.
01-17-2023 10:47 PM
Strange.
That does seem like a bug/feature indeed. Or is there something else running on Databricks, like a SQL warehouse or a job?
01-18-2023 08:13 AM
Actually no, I just have a blank workspace. I had previously created a workspace with a small table in the Hive metastore, one single-node cluster, and a job, but I completely deleted that workspace.
02-13-2023 02:34 PM
The answer to the question of why the Kubernetes cluster keeps running regardless of the Databricks compute and SQL warehouse resources is provided in this thread: https://community.databricks.com/s/question/0D58Y00009TbWqtSAF/auto-termination-for-clusters-jobs-an...