Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Executing Bash Scripts or Binaries Directly in Databricks Jobs on Single Node Cluster

jorperort
Contributor

Hi,

Is it possible to directly execute a Bash script or a binary executable from the operating system of a Databricks job compute node using a single node cluster?
I'm using Databricks Asset Bundles for job initialization and execution. When the job starts, I plan to clone a repository containing Bash scripts and binary executables so that these are available within the single node cluster environment.

The goal is to run one of these scripts or binaries directly from the compute node's operating system, passing parameters at runtime, without intermediaries such as notebook cells with %sh, Python subprocess calls, or init scripts.

3 REPLIES

eniwoke
Contributor II

Hi @jorperort, as far as I know there is no dedicated task type (such as a bash_task) for jobs that lets you run Bash scripts without going through notebook cells with %sh or Python's subprocess. Have you considered using init scripts for your cluster when setting up the job?

With an init script, you can run Bash commands to download files, install binaries, or execute other setup steps on the cluster at startup. You can try it and let me know how it goes.
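Something along these lines could be attached as a cluster-scoped init script; the repository URL, install directory, and script names are placeholders, not anything specific to your setup:

```bash
#!/bin/bash
# Sketch of a cluster-scoped init script (runs on each node at cluster startup).
# The repo URL, install directory, and script names below are placeholders.
set -euo pipefail

INSTALL_DIR=/databricks/custom-tools

# Clone the repository that holds the Bash scripts and binaries
# (assumes git is available on the node; install it first if it is not).
git clone https://github.com/example/my-tools.git "$INSTALL_DIR"

# Make the binaries executable and optionally run a setup step right away.
chmod +x "$INSTALL_DIR"/bin/*
"$INSTALL_DIR"/bin/setup.sh
```

The script itself has to be referenced in the cluster's init_scripts configuration (for example from a workspace file or a volume), and it only runs at startup, not at arbitrary points during the job.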

Eni

Good afternoon @eniwoke, and thank you very much for your comment.

Yes, I had already considered using init scripts, but I was hoping to find a different solution, since the idea is to pass parameters to these Bash scripts or compiled executables at runtime. I assume that if I approach it with init_scripts, I would have to retrieve those parameters through environment variables.

 

I'm not sure whether I can retrieve them through environment variables, and I'm not certain whether the init script runs before those environment variables have been set.
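For illustration, the pattern I have in mind would look roughly like this; MY_JOB_PARAM is a made-up name, and it assumes the value is exposed to the init script through the cluster's spark_env_vars (Environment variables) configuration:

```bash
#!/bin/bash
# Sketch: read a parameter from a cluster environment variable inside an
# init script. MY_JOB_PARAM is a hypothetical name that would have to be
# defined in the cluster's spark_env_vars; fail fast if it is missing.
set -euo pipefail

: "${MY_JOB_PARAM:?MY_JOB_PARAM is not set on this cluster}"

# Forward the value to the cloned binary (path is a placeholder).
/databricks/custom-tools/bin/my_binary --param "$MY_JOB_PARAM"
```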

 

I would prefer another solution. I'm not sure whether spark-submit can be used for this: while it's possible to spark-submit a compiled JAR, I don't know whether a compiled file of another type can be executed that way.

 

That's another point: if anyone has encountered this issue, it would be really helpful if they could share their experience. It would be greatly appreciated.

Louis_Frolio
Databricks Employee

Hello @jorperort, I did some research internally and have some tips/suggestions for you to consider:

 

Based on the research and available documentation, it is not possible to directly execute a Bash script or binary executable from the operating system of a Databricks job compute node (even on a single node cluster) in the manner you described.

Key Points from Analysis

  • Databricks Asset Bundles & Job Initialization
    • Asset bundles allow automated cloning and setup of code and assets, and can be configured to initialize environments, install dependencies, and set up scripts using init scripts.
    • There is extensive support and documentation for using init scripts (cluster-scoped, volume-based, or workspace files) to run Bash scripts or install binaries at cluster startup. However, init scripts only run automatically during cluster startup, and you cannot trigger them ad hoc after the cluster is up.
  • Job Task Execution Model
    • The standard Databricks job execution model is highly abstracted. User jobs (from Workflows, the Jobs API, or the UI) always execute within a managed containerized environment, via notebooks, JARs, Python scripts, dbt, or other supported tasks. There is no supported mechanism to attach a job or task that instructs the cluster runtime to simply run a Bash script "raw" on the OS at job runtime.
    • Even with single node clusters (where the compute node functions as both driver and worker), the available user-accessible task types are still confined to managed abstractions (notebooks, scripts, SQL, etc.). The compute runs Spark processes and the "job" context; directly launching OS-level processes without these abstractions is neither supported nor exposed in the APIs or asset bundle configuration.
  • No Shell/SSH Access
    • Databricks does not provide shell access (e.g., SSH) to job compute nodes, especially on ephemeral job clusters (including single node types). Any mechanisms that do interact with the node OS (such as init scripts) are provisioned and managed by system services and are not interactive or re-entrant during job/task execution.
  • Init Scripts as Only "Native" Mechanism
    • All the current documentation and internal references state that cluster/job initialization scripts (init scripts) are the supported way to execute arbitrary Bash commands or install/run binaries on compute nodes.
    • These scripts must be attached at cluster creation time and run during startup. There is no documented means to attach or execute them at arbitrary points during a job run.
  • Alternative Paths Not Supported
    • There are no API endpoints, CLI commands, or asset bundle configurations that would allow you to submit a raw OS-level command or script to be executed outside of the managed abstractions noted above.
    • All code examples and available templates in the Databricks SDKs, as well as job/task APIs, ultimately execute within these supported frameworks and do not provide an escape hatch to the raw OS command environment at runtime.

Explicit Limitations and Security Considerations

  • Allowing arbitrary "raw" execution of binaries or Bash scripts on the underlying compute node would break the Databricks managed security model and is deliberately not permitted, both to ensure workspace isolation and to prevent customer or third-party code from affecting the platform or other users.
  • The ephemeral and containerized nature of job clusters means that even if a script or binary were deposited into the container, there is no "native" way to ask the node's system to execute it post-startup, apart from the managed task environments.
What You Can (and Can't) Do

You CAN:
  • Use an init script (attached during cluster setup via asset bundle, cluster-scoped, workspace file, or volume) to run OS-level commands during cluster startup.
  • Execute Bash, Python, or binaries indirectly (via %sh magic, Python subprocess, or similar constructs) inside notebook cells or scripts.

You CANNOT:
  • Execute Bash scripts or binaries directly from the native node OS at arbitrary points during a running job, nor bypass Databricks' managed task abstractions.
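For completeness, the indirect path from a notebook cell looks roughly like this; the repo path, script name, and arguments are placeholders, not taken from your setup:

```bash
%sh
# Notebook %sh cell sketch: copy the cloned script to local disk, make it
# executable, and run it with runtime arguments. Paths and argument values
# below are placeholders.
cp /Workspace/Repos/my-user/my-tools/bin/run_job.sh /tmp/run_job.sh
chmod +x /tmp/run_job.sh
/tmp/run_job.sh --input /dbfs/tmp/input --mode full
```

The same thing can be done from a Python script task using subprocess, which also makes it straightforward to forward job parameters as command-line arguments.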

Table: Supported vs. Unsupported Mechanisms

Mechanism | Can Run Bash/Binary Directly on OS? | Notes
Init scripts | Yes | Only run at startup, not at job runtime
Notebook %sh cell / subprocess | Indirect, via managed kernel | Not "raw OS", but runs in a shell subprocess
Databricks Asset Bundles | No | Can set up scripts, but they must be called via supported interfaces
SSH/shell to node | No | Not available for job clusters
Custom job task (raw script) | No | Only notebook, SQL, dbt, JAR, or Python tasks are supported

Conclusion

It is not possible to directly execute a Bash script or binary executable from the operating system of a Databricks job compute node on a single node cluster, at job execution time, without intermediaries such as notebook commands, Python subprocess, or init scripts.
 
If your goal is reproducible, automated job setup or initial code execution using scripts or binaries, you must employ init scripts (at startup) or supported notebook/script-based task mechanisms, not direct out-of-band OS execution.
 
I hope this is helpful to you.
 
Cheers, Louis.
