Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Cluster scoped init script failing

pablobd
Contributor II

I am creating a cluster with asset bundles and adding an init script to it, also via asset bundles. The init script is a .sh file in a UC Volume. When I run a job, the cluster spins up and fails with this error:

Cluster '****' was terminated. Reason: INIT_SCRIPT_FAILURE (CLIENT_ERROR). Parameters: instance_id:****, databricks_error_message:Cluster scoped init script /Volumes/***.sh failed: Script exit status is non-zero.

 

I enabled logging to S3, and the logged error says:

bash: line 11: /Volumes/***.sh: Permission denied


The principal spinning up the cluster has ALL PRIVILEGES granted on the volume and the file. The init script is quite simple (just to test that it can run):

#!/bin/bash
printf "Hello world!!"
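For context, this is roughly how the init script is wired up in the bundle configuration. A minimal sketch, assuming a job cluster defined in `databricks.yml`; the job name, node type, and volume path below are placeholders, not my actual values:

```yaml
# databricks.yml (sketch) -- job cluster with a UC Volume init script
resources:
  jobs:
    my_job:  # placeholder job name
      name: my_job
      job_clusters:
        - job_cluster_key: main
          new_cluster:
            spark_version: 14.3.x-scala2.12
            node_type_id: i3.xlarge
            num_workers: 1
            init_scripts:
              - volumes:
                  destination: /Volumes/my_catalog/my_schema/my_volume/init.sh
```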
1 ACCEPTED SOLUTION

Accepted Solutions

pablobd
Contributor II

It's actually solved: the principal that had the permissions was the integ one, while the job runs as the prod one. So I needed to grant permissions to the prod principal, and now it runs.

View solution in original post
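For anyone hitting the same thing, the fix amounts to granting the volume privileges to the service principal the job actually runs as. A sketch in Databricks SQL, where the catalog, schema, volume, and principal names are all placeholders:

```sql
-- Grant read access on the volume to the principal the prod job runs as
GRANT READ VOLUME ON VOLUME my_catalog.my_schema.my_volume TO `prod-service-principal`;

-- The principal also needs access to the parent catalog and schema
GRANT USE CATALOG ON CATALOG my_catalog TO `prod-service-principal`;
GRANT USE SCHEMA ON SCHEMA my_catalog.my_schema TO `prod-service-principal`;
```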

2 REPLIES 2

PabloCSD
Valued Contributor

Hello Pablo, where did you change the permissions? We are having the exact same issue, but with dbx.

We are using a .sh to install a library (it just does "pip install ***.whl").
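For what it's worth, here is a minimal sketch of such an install init script, assuming the wheel also lives in a UC Volume (the path is a placeholder). The `set -e` makes the script exit non-zero if the install fails, which is what surfaces as INIT_SCRIPT_FAILURE:

```bash
#!/bin/bash
# Sketch: install a wheel from a UC Volume at cluster startup (placeholder path).
set -euo pipefail
/databricks/python/bin/pip install /Volumes/my_catalog/my_schema/my_volume/my_lib.whl
```

The job's run-as principal needs READ VOLUME on the volume holding both this script and the wheel, or the install will fail with a permission error like the one above.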
