Hello @jorperort, I did some research internally and have some tips and suggestions for you to consider:
Based on the research and available documentation, it is not possible to directly execute a Bash script or binary executable from the operating system of a Databricks job compute node (even on a single node cluster) in the manner you described.
Key Points from Analysis
Databricks Asset Bundles & Job Initialization
- Asset bundles automate cloning and deployment of code and assets, and can configure clusters to initialize environments, install dependencies, and stage scripts via init scripts.
- There is extensive support and documentation for using init scripts (cluster-scoped, volume-based, or workspace files) to run Bash scripts or install binaries at cluster startup. However, init scripts only run automatically during cluster startup; you cannot trigger them ad hoc after the cluster is up. See the sketch below for how an init script is attached to a job cluster.
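To make that concrete, here is a minimal sketch of submitting a one-time job run whose single node cluster attaches a workspace-file init script. The host, token, paths, runtime version, and node type are placeholder assumptions; the payload follows the Jobs API 2.1 runs/submit shape:

```python
import requests

HOST = "https://<your-workspace-host>"   # placeholder
TOKEN = "<personal-access-token>"        # placeholder

payload = {
    "run_name": "init-script-demo",
    "tasks": [{
        "task_key": "main",
        "notebook_task": {"notebook_path": "/Users/me/run_after_setup"},  # hypothetical path
        "new_cluster": {
            "spark_version": "15.4.x-scala2.12",   # example runtime
            "node_type_id": "i3.xlarge",           # example node type
            "num_workers": 0,
            # Single node profile: the driver doubles as the only worker.
            "spark_conf": {
                "spark.databricks.cluster.profile": "singleNode",
                "spark.master": "local[*]",
            },
            "custom_tags": {"ResourceClass": "SingleNode"},
            # The only point where the platform runs your Bash for you:
            # this script executes once, during cluster startup.
            "init_scripts": [
                {"workspace": {"destination": "/Users/me/setup.sh"}}  # hypothetical path
            ],
        },
    }],
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/runs/submit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json()["run_id"])
```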
Job Task Execution Model
- The standard Databricks job execution model is highly abstracted. User jobs (from Workflows, the Jobs API, or the UI) always execute within a managed, containerized environment via notebooks, JARs, Python scripts, dbt, or other supported task types. There is no supported mechanism to define a job or task that instructs the cluster runtime to simply run a Bash script "raw" on the OS at job runtime.
- Even with single node clusters (where the compute node functions as both driver and worker), the user-accessible task types are still confined to managed abstractions (notebooks, scripts, SQL, etc.). The compute node runs Spark processes within the job context; directly launching OS-level processes outside these abstractions is neither supported nor exposed in the APIs or asset bundle configuration.
No Shell/SSH Access
- Databricks does not provide interactive shell access (e.g., SSH) to job compute nodes, and ephemeral job clusters (including single node clusters) are no exception. The mechanisms that do interact with the node OS (such as init scripts) are provisioned and managed by system services and are neither interactive nor re-entrant during job/task execution.
Init Scripts as the Only "Native" Mechanism
- All the current documentation and internal references state that cluster/job initialization scripts (init scripts) are the supported way to execute arbitrary Bash commands or install/run binaries on compute nodes.
- These scripts must be attached at cluster creation time and run during startup. There is no documented means to attach or execute them at arbitrary points during a job run.
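One common pattern is to stage the script as a workspace file before creating the cluster that references it. Here is a sketch using the Workspace Import REST API; the host, token, script contents, and destination path are placeholder assumptions:

```python
import base64
import requests

HOST = "https://<your-workspace-host>"   # placeholder
TOKEN = "<personal-access-token>"        # placeholder

# The init script itself: plain Bash, executed once at cluster startup.
script = b"#!/bin/bash\nset -euo pipefail\napt-get update -y\napt-get install -y jq\n"

resp = requests.post(
    f"{HOST}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "path": "/Users/me/setup.sh",   # hypothetical path, referenced by init_scripts
        "format": "AUTO",
        "content": base64.b64encode(script).decode(),
        "overwrite": True,
    },
)
resp.raise_for_status()
```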
Alternative Paths Not Supported
- There are no API endpoints, CLI commands, or asset bundle configurations that would allow you to submit a raw OS-level command or script to be executed outside of the managed abstractions noted above.
- All code examples and templates in the Databricks SDKs, as well as the job/task APIs, ultimately execute within these supported frameworks and provide no escape hatch to a raw OS command environment at runtime, as the sketch below illustrates.
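For example, in the databricks-sdk Python package, a task can only be expressed through the managed task types. The snippet below is a minimal sketch: the classes and signatures shown reflect my understanding of the current SDK, and the job name and notebook path are hypothetical:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # resolves host/token from the environment or a config profile

# A Task must carry exactly one managed task type: notebook_task,
# spark_python_task, spark_jar_task, sql_task, dbt_task, python_wheel_task,
# etc. There is no "bash_task" or raw-command field on the Task object.
job = w.jobs.create(
    name="managed-task-demo",
    tasks=[
        jobs.Task(
            task_key="main",
            notebook_task=jobs.NotebookTask(notebook_path="/Users/me/demo"),  # hypothetical path
            # Compute configuration (job cluster or serverless) omitted for brevity.
        )
    ],
)
print(job.job_id)
```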
Explicit Limitations and Security Considerations
- Allowing arbitrary "raw" execution of binaries or Bash scripts on the underlying compute node would break the Databricks managed security model and is deliberately not permitted, both to ensure workspace isolation and to prevent customer or third-party code from affecting the platform or other users.
- Given the ephemeral and containerized nature of job clusters, even if a script or binary were deposited into the container, there is no "native" way to ask the node's system to execute it post-startup, apart from the managed task environments.
What You Can (and Can't) Do
You CAN:
- Use an init script (attached during cluster setup via an asset bundle, cluster-scoped configuration, workspace file, or volume) to run OS-level commands during cluster startup.
- Execute Bash, Python, or binaries indirectly (via the %sh magic, Python subprocess, or similar constructs) inside notebook cells or scripts; see the sketch below.
You CANNOT:
- Execute Bash scripts or binaries directly from the native node OS at arbitrary points during a running job, nor bypass Databricks' managed task abstractions.
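As a minimal illustration of the indirect path (assuming a notebook cell or Python task on the job cluster; the script path is hypothetical), the managed Python process can spawn a shell subprocess:

```python
# Runs inside a notebook cell or a Python task: the Bash script executes
# as a child of the managed Python process, not as a "raw" OS-level job.
import subprocess

result = subprocess.run(
    ["bash", "/Workspace/Users/me/my_script.sh"],  # hypothetical script path
    capture_output=True,
    text=True,
    check=True,  # raise CalledProcessError on a non-zero exit code
)
print(result.stdout)
```

In a notebook, the %sh cell magic achieves the same effect with less ceremony, but either way the process is a child of the managed runtime, which is exactly the limitation described above.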
Table: Supported vs. Unsupported Mechanisms
| Mechanism | Can Run Bash/Binary Directly on OS? | Notes |
| --- | --- | --- |
| Init scripts | Yes | Runs only at cluster startup, not at job runtime |
| Notebook %sh cell / subprocess | Indirect, via managed kernel | Not "raw OS"; runs in a shell subprocess |
| Databricks Asset Bundles | No | Can stage scripts, but they must be invoked via supported interfaces |
| SSH/shell to node | No | Not available for job clusters |
| Custom job task (raw script) | No | Only notebook, SQL, dbt, JAR, or Python tasks supported |
Conclusion
It is not possible to directly execute a Bash script or binary from the operating system of a Databricks job compute node (even on a single node cluster) at job execution time without an intermediary such as a notebook command, a Python subprocess, or an init script.
If your goal is reproducible, automated job setup or initial code execution using scripts or binaries, you must use init scripts (at startup) or the supported notebook/script-based task mechanisms, not direct out-of-band OS execution.
I hope this is helpful to you.
Cheers, Louis.