01-23-2025 11:25 AM
I'm using a custom container *and* init scripts. At runtime, I get this error:
Cluster '...' was terminated. Reason: INIT_SCRIPT_FAILURE (CLIENT_ERROR). Parameters: instance_id:i-0440ddd3a2d5cce79, databricks_error_message:Cluster scoped init script s3://<our_bucket>/<our_init_script.sh> failed: Timed out with exception after 5 attempts (debugStr = 'Reading remote file for init script'), Caused by: com.databricks.objectstore.location.PermanentStorageException$AwsForbidden: Missing credentials to access AWS bucket.
It worked previously *without* the container, so I'm pretty sure the use of the container is triggering the problem. I suspect that the fetch-init-scripts-from-s3 operation is occurring *inside* the container, and that the container itself lacks AWS credentials. What's the preferred way to pass AWS credentials to a custom container?
Accepted Solutions
01-27-2025 10:29 AM
Followup: I got the AWS creds working by amending our AWS role to permit read/write access to our S3 bucket. Woohoo!
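For reference, a sketch of the kind of inline policy we attached — the role and policy names here are placeholders, not our actual values:

```bash
# Sketch only: grant the cluster's instance-profile role read/write access
# to the init-script bucket. Role, policy, and bucket names are placeholders.
aws iam put-role-policy \
  --role-name <databricks-cluster-role> \
  --policy-name databricks-init-script-s3-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::<our_bucket>/*"
      },
      {
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
        "Resource": "arn:aws:s3:::<our_bucket>"
      }
    ]
  }'
```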
01-23-2025 02:51 PM
Hey! It would be great if you could share more details about the cluster type and access mode.
If you are using, for example, an all-purpose cluster with shared access mode, I recommend configuring the "Init Script" option inside the advanced cluster settings.
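For example, here is roughly what that setting looks like as a cluster spec submitted through the CLI — a sketch only; the instance profile ARN (which is what supplies the S3 credentials on AWS), node type, and region are placeholders:

```bash
# Rough sketch of the equivalent cluster spec via the Databricks CLI.
# The instance profile ARN, bucket, region, and node type are placeholders.
databricks clusters create --json '{
  "cluster_name": "init-script-example",
  "spark_version": "15.4.x-scala2.12",
  "node_type_id": "i3.xlarge",
  "num_workers": 1,
  "aws_attributes": {
    "instance_profile_arn": "arn:aws:iam::<account-id>:instance-profile/<profile-name>"
  },
  "init_scripts": [
    {
      "s3": {
        "destination": "s3://<our_bucket>/<our_init_script.sh>",
        "region": "<region>"
      }
    }
  ]
}'
```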
If Unity Catalog is enabled, ensure that your S3 path (`s3://<our_bucket>/<our_init_script.sh>`) is allowed under Catalog Explorer > Allowed JARs/Init Scripts.
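If you prefer to script that allowlist change rather than click through the UI, a rough sketch using the artifact-allowlists REST API (assuming `$DATABRICKS_HOST` and `$DATABRICKS_TOKEN` are set; the path is a placeholder):

```bash
# Rough sketch: allowlist the init-script path for Unity Catalog clusters.
# Assumes $DATABRICKS_HOST and $DATABRICKS_TOKEN are set in the environment.
curl -X PUT "$DATABRICKS_HOST/api/2.1/unity-catalog/artifact-allowlists/INIT_SCRIPT" \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  -d '{
    "artifact_matchers": [
      {
        "artifact": "s3://<our_bucket>/<our_init_script.sh>",
        "match_type": "PREFIX_MATCH"
      }
    ]
  }'
```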
It would also be helpful to understand why you want to use the init script, as there might be other options available.
I hope you find this helpful. 🙂
01-27-2025 08:01 AM
>If you are using, for example, an all-purpose cluster with shared access mode, I recommend configuring the "Init Script" option inside the advanced cluster settings.
Yep, that's the approach. I've got init scripts specified in the cluster settings, and I'm hitting the "Missing credentials to access AWS bucket" error when my job runs.
>It would also be helpful to understand why you want to use the init script, as there might be other options available.
We need to set a variety of variables that are only known when the job starts, as well as start up some processes on the container (mostly for telemetry/logging retention).
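For flavor, a stripped-down sketch of what the init script does — the variable names, agent path, and values are made up for illustration:

```bash
#!/bin/bash
# Illustrative sketch only: export values known at job start, then launch
# a background process for telemetry/log retention. Names/paths are made up.
set -euo pipefail

# Values resolved when the job starts (illustrative).
export JOB_ENVIRONMENT="production"
export LOG_RETENTION_DAYS="14"

# Launch a hypothetical telemetry agent in the background so it keeps
# running after the init script exits.
nohup /opt/telemetry/agent --retention-days "$LOG_RETENTION_DAYS" \
  > /var/log/telemetry-agent.log 2>&1 &
```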