Spark Failure Error: Unable to download Spark Docker image

Soma
Valued Contributor

Cluster terminated. Reason: Spark Image Download Failure

  "reason": { "code": "SPARK_IMAGE_DOWNLOAD_FAILURE", "type": "SERVICE_FAULT", "parameters": { "instance_id": "6565aa39b0ae4fe69c7fe6f313e3ca2a", "databricks_error_message": "Failed to set up the docker container due to a spark image download failure: Failed to download Spark image release__9.1.x-snapshot-scala2.12__databricks-universe__head__ea1927f__22a25a8__jenkins__26c6769__format-2 with exit code 1, stdout = , stderr = 2022/05/25 12:34:21 INFO worker_common.py:443: Acquiring lock file: /var/lib/lxc/base-images/release__9.1.x-snapshot-scala2.12__databricks-universe__head__ea1927f__22a25a8__jenkins__26c6769__format-2.lock\nTraceback (most recent call last):\n File \"/home/ubuntu/databricks/scripts/update_worker/.bootstrap/_pex/pex.py\", line 339, in execute\n File \"/home/ubuntu/databricks/scripts/update_worker/.bootstrap/_pex/pex.py\", line 267, in _wrap_coverage\n File \"/home/ubuntu/databricks/scripts/update_worker/.bootstrap/_pex/pex.py\", line 299, in _wrap_profiling\n File \"/home/ubuntu/databricks/scripts/update_worker/.bootstrap/_pex/pex.py\", line 382, in _execute\n File \"/home/ubuntu/databricks/scripts/update_worker/.bootstrap/_pex/pex.py\", line 440, in execute_entry\n File \"/home/ubuntu/databricks/scripts/update_worker/.bootstrap/_pex/pex.py\", line 445, in execute_module\n File \"/home/ubuntu/databricks/scripts/update_worker/.bootstrap/_pex/hacked_runpy.py\", line 308, in run_module_with_sys_modules_override\n File \"/home/ubuntu/databricks/scripts/update_worker/.bootstrap/_pex/hacked_runpy.py\", line 90, in _run_code\n File \"/home/ubuntu/databricks/scripts/update_worker/manager/scripts/update_worker.py\", line 217, in <module>\n File \"/home/ubuntu/databricks/scripts/update_worker/manager/scripts/update_worker.py\", line 208, in main\n File \"/home/ubuntu/databricks/scripts/update_worker/manager/scripts/worker_spark.py\", line 168, in handle_spark\n File \"/home/ubuntu/databricks/scripts/update_worker/manager/scripts/worker_common.py\", line 444, in acquire_lock_file\nIOError: [Errno 30] Read-only file system: '/var/lib/lxc/base-images/release__9.1.x-snapshot-scala2.12__databricks-universe__head__ea1927f__22a25a8__jenkins__26c6769__format-2.lock'\n" } } }

1 ACCEPTED SOLUTION

Kaniz
Community Manager

Hi @somanath Sankaran, sometimes a cluster is terminated unexpectedly, rather than through a manual termination or a configured automatic termination. A cluster can be terminated for many reasons: some terminations are initiated by Azure Databricks, and others by the cloud provider. This article describes the termination reason codes and the steps for remediation.
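
While investigating, it can also help to pull the same termination reason programmatically rather than from the cluster UI. Below is a minimal sketch against the Clusters API 2.0 (not something referenced in this thread): it assumes a workspace URL in DATABRICKS_HOST, a personal access token in DATABRICKS_TOKEN, and a placeholder cluster ID, and it reads the termination_reason block that the error message above is built from.

import os
import requests

# Placeholder connection details: substitute your own workspace URL, token, and cluster ID.
host = os.environ["DATABRICKS_HOST"]    # e.g. "https://adb-1234567890123456.7.azuredatabricks.net"
token = os.environ["DATABRICKS_TOKEN"]  # personal access token
cluster_id = "0525-123400-abcde123"     # hypothetical cluster ID

# Ask the Clusters API for the cluster's current state and its last termination reason.
resp = requests.get(
    f"{host}/api/2.0/clusters/get",
    headers={"Authorization": f"Bearer {token}"},
    params={"cluster_id": cluster_id},
)
resp.raise_for_status()
info = resp.json()

# termination_reason mirrors the block shown in the question:
# a code (e.g. SPARK_IMAGE_DOWNLOAD_FAILURE), a type (e.g. SERVICE_FAULT), and parameters.
reason = info.get("termination_reason", {})
print("state:", info.get("state"))
print("code :", reason.get("code"))
print("type :", reason.get("type"))
print("error:", reason.get("parameters", {}).get("databricks_error_message"))

The code field is the value that the termination-reasons article keys its remediation guidance on.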

2 REPLIES

Kaniz
Community Manager

Hi @somanath Sankaran, have you enabled Container Services on your cluster?

To use custom containers on your clusters, a workspace administrator must first enable Databricks Container Services as follows (a sketch of launching a cluster with a custom image follows the steps):

  1. Go to the admin console.
  2. Click the Workspace Settings tab.
  3. In the Cluster section, click the Container Services toggle. Click Confirm.
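
Once Container Services is enabled, the custom image is passed in when the cluster is created. The snippet below is a minimal sketch against the Clusters API 2.0, assuming DATABRICKS_HOST and DATABRICKS_TOKEN environment variables and an image in a registry the workspace can reach; the image URL, node type, runtime version, and registry credentials are placeholders, not values from this thread.

import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. "https://adb-1234567890123456.7.azuredatabricks.net"
token = os.environ["DATABRICKS_TOKEN"]  # personal access token

# Hypothetical cluster spec: swap in your own runtime version, node type, and image URL.
cluster_spec = {
    "cluster_name": "custom-container-test",
    "spark_version": "9.1.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",   # Azure node type; adjust for your cloud
    "num_workers": 1,
    "docker_image": {
        "url": "myregistry.azurecr.io/my-spark-image:latest",  # placeholder image
        # "basic_auth": {"username": "<user>", "password": "<token>"},  # only for private registries
    },
}

# Create the cluster. If Container Services is not enabled in the workspace,
# a request like this is rejected, which is a quick way to confirm the toggle above took effect.
resp = requests.post(
    f"{host}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {token}"},
    json=cluster_spec,
)
resp.raise_for_status()
print("created cluster:", resp.json()["cluster_id"])

The same image URL and credentials can also be entered on the Docker tab of the cluster creation UI once the feature is enabled.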
