I've been trying to use the Community Edition for the past 3 days without success. I go to run a notebook and it begins to allocate the cluster, but it never finishes. Sometimes it times out after 15 minutes: "Waiting for cluster to start: Finding instances for new nodes, acquiring more instances if necessary..."
@Dileep Vidyadara - There did seem to be a problem during the time you posted. For future reference, when you're having trouble, you can check what's going on by going to the AWS Databricks Status Page. Let us know if you have any other questions.
Hello - I've been using the Databricks notebook (for PySpark or Scala/Spark development), and recently cluster creation has been taking a long time, often timing out. Any ideas on how to resolve this?
Hi Karankaran.alang, what error message are you getting? Did you get this error while creating/starting a cluster in CE? Sometimes these errors are intermittent and go away after a few retries. Thank you.
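If it helps with diagnosis, here is a minimal sketch of pulling a cluster's recent event log via the Clusters API 2.0 events endpoint, which usually shows why a start attempt failed. The workspace host, token, and cluster ID below are placeholders you would fill in.

```python
import requests

# Placeholders: substitute your own workspace URL, token, and cluster ID.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"
CLUSTER_ID = "<cluster-id>"

# Fetch the most recent events for the cluster, newest first.
resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/clusters/events",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"cluster_id": CLUSTER_ID, "order": "DESC", "limit": 25},
)
resp.raise_for_status()

# Failed starts typically surface here with a "details" payload
# explaining the termination or instance-acquisition problem.
for event in resp.json().get("events", []):
    print(event["timestamp"], event["type"], event.get("details", {}))
```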
One can create a cluster (or clusters) using the Clusters API at https://docs.databricks.com/dev-tools/api/latest/clusters.html#create. However, REST API 2.0 doesn't expose certain features like "Enable Table Access Control", which was introduced after REST API 2.0.
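For reference, here is a sketch of creating a cluster through the Clusters API 2.0 create endpoint linked above. A commonly suggested workaround for the missing UI toggle is to pass the equivalent Spark configuration under spark_conf; the specific keys used here are an assumption and worth verifying against the current Databricks documentation. Host, token, runtime version, and node type are placeholders.

```python
import requests

# Placeholders: substitute your own workspace URL and token.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

payload = {
    "cluster_name": "tac-enabled-cluster",
    "spark_version": "10.4.x-scala2.12",  # example runtime; pick a supported one
    "node_type_id": "i3.xlarge",          # example AWS node type
    "num_workers": 2,
    "spark_conf": {
        # Assumed spark_conf equivalents of the "Enable Table Access
        # Control" UI toggle; verify against current docs before relying
        # on them.
        "spark.databricks.acl.dfAclsEnabled": "true",
        "spark.databricks.repl.allowedLanguages": "python,sql",
    },
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print("Created cluster:", resp.json()["cluster_id"])
```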
I have been trying to create a new cluster to use, and multiple attempts have gotten stuck in pending: "Finding instances for new nodes, acquiring more instances if necessary" until they time out. Up to today I have had no problems creating clusters.
Hi all, I am getting this error for some jobs. Can you please let me know what could be the reason? Run result unavailable: job failed with an error message - "Unexpected failure while waiting for the cluster..."
This is an issue at the cloud level, so try putting retries in the job; it doesn't happen for every cluster start, and a run that fails once will usually start after a retry. Also, raise a Databricks support ticket, and they can provide a permanent solution.
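As one way to set up those retries, here is a sketch of updating an existing job's task via the Jobs API so a transient cluster-start failure triggers an automatic re-run instead of a failed job. The host, token, job ID, and task key are placeholders, and the retry fields (max_retries, min_retry_interval_millis, retry_on_timeout) should be checked against the Jobs API version your workspace uses.

```python
import requests

# Placeholders: substitute your own workspace URL, token, and job ID.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"
JOB_ID = 12345  # placeholder job ID

# Partially update the job so its task retries on transient failures.
resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/update",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "job_id": JOB_ID,
        "new_settings": {
            "tasks": [
                {
                    "task_key": "main",                  # placeholder task key
                    "max_retries": 3,                    # retry up to 3 times
                    "min_retry_interval_millis": 60000,  # wait 1 min between tries
                    "retry_on_timeout": True,
                }
            ]
        },
    },
)
resp.raise_for_status()
```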