08-23-2023 12:52 PM
I have a large Docker image in our AWS ECR repo. The image is 27.4 GB uncompressed locally and 11,539.79 MB (about 11.5 GB) compressed in ECR.
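For reference, the compressed size that ECR reports can be checked with the AWS CLI; the repository name and tag below are placeholders, not the actual repo:

# Compressed image size in bytes as stored in ECR (repo name and tag are placeholders)
aws ecr describe-images \
  --repository-name my-large-image \
  --image-ids imageTag=latest \
  --query 'imageDetails[0].imageSizeInBytes'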
The error from the Event Log is:
Failed to add 2 containers to the compute. Will attempt retry: true. Reason: Docker image pull failure
JSON:
{
  "reason": {
    "code": "DOCKER_IMAGE_PULL_FAILURE",
    "type": "SERVICE_FAULT",
    "parameters": {
      "instance_id": "i-0172cf9b70a25df47",
      "databricks_error_message": "Downloading docker image has timed out"
    }
  },
  "add_node_failure_details": {
    "failure_count": 2,
    "resource_type": "container",
    "will_retry": true
  }
}
10-31-2023 07:54 AM
I'm having the same issue. The official Databricks runtime GPU images are already quite large, so using one as a base makes it easy to hit this timeout. Did anyone ever find a fix?
11-13-2023 01:05 PM
I have a similar problem: a 10 GB image pulls fine, but a 31 GB image doesn't. Both the workers and the driver have 64 GB of memory. I get the timeout error with "Cannot launch the cluster because pulling the docker image failed. Please double check connectivity from workers to the container registry, as well as the credentials used to pull the image".
Were you able to figure out a solution?
11-16-2023 08:38 AM
@Retired_mod it's not possible to change the timeout value for the Docker image pull on a Databricks cluster; that setting isn't exposed to the user.
11-16-2023 08:40 AM
The only workaround as of now is to reduce the size of your image: start from a smaller base image, avoid chaining multiple intermediate images that build off of each other, reduce the number of layers, aggressively purge apt and pip caches, and so on. A rough sketch of these techniques is below.
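As an illustration only (the base image tag, packages, and requirements file are placeholders, not a recommendation):

# Base image tag is a placeholder; pick one matching your Databricks runtime.
FROM databricksruntime/standard:12.2-LTS

# Chain related commands into a single RUN so they produce one layer,
# and purge the apt cache in the same layer so it is never committed.
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential \
    && rm -rf /var/lib/apt/lists/*

# --no-cache-dir keeps pip from persisting downloaded wheels in the image.
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt \
    && rm /tmp/requirements.txt

After building, `docker history <image>` lists the size of each layer, which makes it easy to spot which step is bloating the image.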