
Client.UserInitiatedShutdown

gazzyjuruj
Contributor II

Hi,

Everything seemed fine until now, but I've suddenly been getting a Client.UserInitiatedShutdown error.

What is wrong?

Thanks.

1 ACCEPTED SOLUTION


Kaniz
Community Manager

Hi @Ghazanfar Uruj, Thank you for your appreciation!

Sometimes a cluster is terminated unexpectedly, not due to a manual termination or a configured automatic termination. A cluster can be terminated for many reasons. Some terminations are initiated by Databricks and others are initiated by the cloud provider. This article describes termination reasons and steps for remediation.

In your case,

The Spark driver is a single point of failure because it holds all cluster states. If the instance hosting the driver node is shut down, Databricks terminates the cluster.

In AWS, standard error codes include:

Client.UserInitiatedShutdown

The instance was terminated by a direct request to AWS, which did not originate from Databricks. Contact your AWS administrator for more details.
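One quick way to confirm which termination reason Databricks recorded is the cluster event log. A minimal sketch of summarising those events, assuming a payload shaped like the Clusters API events response (fetch the real payload for your cluster; the sample data below just mirrors the error in this thread):

```python
def termination_reasons(events_payload):
    """Return the reason codes of all TERMINATING events in the payload."""
    reasons = []
    for event in events_payload.get("events", []):
        if event.get("type") == "TERMINATING":
            reason = event.get("details", {}).get("reason", {})
            reasons.append(reason.get("code", "UNKNOWN"))
    return reasons

# Hypothetical sample payload mimicking a cloud-provider-initiated shutdown.
sample = {
    "events": [
        {
            "type": "TERMINATING",
            "details": {
                "reason": {
                    "code": "CLOUD_PROVIDER_SHUTDOWN",
                    "parameters": {
                        "aws_instance_state_reason": "Client.UserInitiatedShutdown"
                    },
                }
            },
        }
    ]
}

print(termination_reasons(sample))  # ['CLOUD_PROVIDER_SHUTDOWN']
```

If the code is a cloud-provider shutdown rather than a Databricks-initiated one, the next step is the AWS side (CloudTrail or your AWS administrator), as described above.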

Have a nice day too!


4 REPLIES

Kaniz
Community Manager

Hi @Ghazanfar Uruj, Would you mind elaborating more on the issue?

Hi, thanks for the response! No wonder you are the #1 contributor!

So I run a project that is a bit storage-heavy (not too much, maybe 1.5 GB or so), and it closes the connection in less than an hour while I'm working on it.

Then I try to reconnect, and the error I get is:

Waiting for cluster to start. Driver node shut down by cloud provider, instance_id: i-0fa2a208f5831b55.., aws_instance_state_reason: Client.UserInitiatedShutdown, aws_error_message: Client.UserInitiatedShutdown: User Initiated...

What am I doing wrong here, and how should it be fixed?

Thanks and have a nice day ahead.
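The key field in a failure message like the one above is `aws_instance_state_reason`. A small sketch of pulling it out of such a log line; the regex is an assumption based on the "key: value, key: value" layout of the message shown in this thread:

```python
import re

def aws_state_reason(message):
    """Extract the AWS instance state reason code from a cluster-start
    failure message, or return None if it is not present."""
    match = re.search(r"aws_instance_state_reason:\s*([\w.]+)", message)
    return match.group(1) if match else None

# Example message modelled on the one pasted above.
msg = ("Waiting for cluster to start. Driver node shut down by cloud provider, "
       "aws_instance_state_reason: Client.UserInitiatedShutdown, "
       "aws_error_message: Client.UserInitiatedShutdown: User Initiated...")

print(aws_state_reason(msg))  # Client.UserInitiatedShutdown
```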


Kaniz
Community Manager

Hi @Ghazanfar Uruj, Just a friendly follow-up: do you still need help, or did my response help you find the solution? Please let us know.