Hi @jimoskar ,
Yep, I have some other ideas that we can check. In Databricks you have two kinds of init scripts:
- Global init scripts - a global init script runs on all clusters in your workspace that are configured with dedicated (formerly single user) or legacy no-isolation shared access mode. So it seems that global init scripts are supported only for those access modes.
- Cluster-scoped init scripts - these are defined in the cluster configuration and apply both to clusters you create yourself and to clusters created to run jobs.
So, do you happen to use a global init script? That could be one explanation for why it doesn't work in your case.
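If you do, one way to rule that out is to attach the same script as a cluster-scoped init script instead. Here's a rough, untested sketch of what that looks like through the Clusters REST API - the host, token, cluster id and script path are placeholders, and clusters/edit expects the full cluster spec, so in practice you'd start from the output of clusters/get:

```python
import requests

# Placeholders - replace with your own workspace URL, token, cluster id, etc.
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

payload = {
    "cluster_id": "<cluster-id>",
    # clusters/edit replaces the whole cluster spec, so copy the existing spec
    # from clusters/get and only adjust the init_scripts field.
    "spark_version": "<spark-version>",
    "node_type_id": "<node-type>",
    "init_scripts": [
        # Unity Catalog volume location; a workspace file path also works.
        {"volumes": {"destination": "/Volumes/<catalog>/<schema>/<volume>/my_init.sh"}}
    ],
}

resp = requests.post(
    f"{HOST}/api/2.0/clusters/edit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
```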
Another thing to check is the size of your init script. The docs mention that:
"The init script cannot be larger than 64KB. If a script exceeds that size, the cluster will fail to launch and a failure message will appear in the cluster log."
Also, you can check the init script logs; maybe we will find some additional info or an error message there.
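If you have cluster log delivery enabled (Compute > your cluster > Advanced options > Logging), the init script output usually ends up under `<log destination>/<cluster-id>/init_scripts/`. A minimal sketch for browsing it from a notebook - the path layout is from memory, so double-check it against your own log destination:

```python
# Run inside a Databricks notebook, where dbutils is available.
log_destination = "dbfs:/cluster-logs"  # whatever you configured as the delivery path
cluster_id = "<cluster-id>"

# List the per-script log files (typically *.stdout.log and *.stderr.log).
for f in dbutils.fs.ls(f"{log_destination}/{cluster_id}/init_scripts/"):
    print(f.path)

# Peek into a specific log file to look for error messages, e.g.:
# print(dbutils.fs.head(f"{log_destination}/{cluster_id}/init_scripts/<file>.stderr.log"))
```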
And last but not least, you can check the articles below. It seems that Windows Notepad and similar editors insert special characters like carriage returns (\r), which can cause issues (there's a quick clean-up sketch after the links):
Init scripts failing with unexpected end of file error - Databricks
Init script stored on a volume fails to execute on cluster start - Databricks
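If the line endings turn out to be the problem, normalizing them before re-uploading the script is straightforward. A minimal sketch, assuming a local copy called my_init.sh (dos2unix or `sed -i 's/\r$//' my_init.sh` would achieve the same thing):

```python
# Rewrite the script with Unix (LF) line endings only.
with open("my_init.sh", "rb") as f:
    content = f.read()

with open("my_init.sh", "wb") as f:
    f.write(content.replace(b"\r\n", b"\n").replace(b"\r", b"\n"))
```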