02-22-2022 03:36 AM
Dear connections,
I'm unable to run a shell script that schedules a cron job via the cluster-scoped init script method on Azure Databricks cluster nodes.
Error from the Azure Databricks workspace:
"databricks_error_message": "Cluster scoped init script dbfs:/<shell-script> failed: Script exit status is non-zero"
Looking for a way to achieve this.
Accepted Solutions
04-13-2022 09:09 AM
Hello @Sugumar Srinivasan, could you please enable cluster log delivery and inspect the init script logs under dbfs:/cluster-logs/<clusterId>/init_scripts?
https://docs.databricks.com/clusters/configure.html#cluster-log-delivery-1
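Once log delivery is enabled, each init script run leaves stdout/stderr files under that path, and the stderr file usually shows why the exit status was non-zero. A minimal helper sketch for browsing them from a notebook `%sh` cell or an SSH session (the directory layout and file suffix are assumptions from the docs, and `<cluster-id>` is a placeholder to fill in):

```shell
#!/bin/bash
# Print the stderr of every delivered init script run so the cause of the
# non-zero exit is visible. Skips quietly if nothing has been delivered yet.
show_init_script_logs() {
  local log_dir="$1"
  [ -d "$log_dir" ] || { echo "no init script logs under $log_dir"; return 0; }
  local f
  for f in "$log_dir"/*/*.stderr.log; do
    [ -e "$f" ] || continue   # glob did not match anything
    echo "== $f =="
    cat "$f"
  done
}

# <cluster-id> is a placeholder for the real cluster ID.
show_init_script_logs "/dbfs/cluster-logs/<cluster-id>/init_scripts"
```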
02-22-2022 10:22 AM
Hello @Sugumar Srinivasan - Welcome to the community! It's nice to meet you. My name is Piper, and I'm a moderator for Databricks.
Let's give the other members a chance to answer your question. We'll come back to this if we need to. 🙂
02-22-2022 11:53 PM
I do not completely understand what you are trying to do.
Are you trying to schedule a cron job on the cluster workers?
02-23-2022 05:08 AM
Hi werners,
I need to clean up the Azure Databricks driver logs (stdout, stderr, log4j) from the DBFS path every hour. To achieve this, I'm trying to schedule a cron job on the Databricks driver node so that the logs are deleted every hour. With the script below as an init script, Azure Databricks cluster creation fails.
The shell script (dbfs:/FileStore/crontab-setup-for-log-cleanup.sh) contains:
sudo -H -u root bash -c 'echo "$(echo "* */5 * * * sh /dbfs/FileStore/driver-logs-cleanup.sh" ; crontab -l 2>&1)" | crontab -'
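One plausible cause of the non-zero exit here: when no crontab exists yet, `crontab -l` prints "no crontab for root" on stderr, and with `2>&1` that message is piped into `crontab -` as a bogus line, which makes the install fail. Also note that `* */5 * * *` fires every minute of every fifth hour, not once every five hours. A minimal sketch of a more defensive init script (assumptions on my part: `DB_IS_DRIVER` is set by Databricks on each node, and `/dbfs/FileStore/driver-logs-cleanup.sh` exists):

```shell
#!/bin/bash
# "0 * * * *" fires at the top of every hour, which matches the stated goal
# of hourly cleanup.
CRON_ENTRY='0 * * * * /bin/bash /dbfs/FileStore/driver-logs-cleanup.sh'

install_cron_entry() {
  # Merge with any existing crontab. Discard the "no crontab for ..." error
  # (2>/dev/null, not 2>&1) so it cannot become an invalid crontab line.
  { crontab -l 2>/dev/null; echo "$CRON_ENTRY"; } | crontab -
}

if [ "${DB_IS_DRIVER:-}" = "TRUE" ]; then
  service cron start 2>/dev/null || true  # cron daemon may not be running yet
  install_cron_entry
fi

true  # end with status 0 so cluster creation is not aborted by this script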
02-23-2022 05:15 AM
Does the script work while the cluster is running, to exclude an issue with the script itself?
02-26-2022 09:56 AM
Yes, I'm able to add a cron entry on the running cluster's Spark driver node through SSH.
02-26-2022 02:17 AM
Databricks cluster creation is failing while running the cron job scheduling script via the init script method on Azure Databricks.
03-08-2022 06:12 AM
The issue is definitely the init script. Please cross-check the init script, or post it here if it contains no sensitive info and we can verify it.
03-15-2022 10:07 PM
@Sugumar Srinivasan, were you able to check the init script and rectify the issue?
03-23-2022 12:04 AM
@Atanu Sarkar,
I'm using the shell script below to try to schedule the crontab on the Databricks node.
crontab-setup-for-log-cleanup.sh
sudo -H -u root bash -c 'echo "$(echo "* */3 * * * sh /dbfs/FileStore/driver-logs-cleanup.sh" ; crontab -l 2>&1)" | crontab -'
driver-logs-cleanup.sh
#!/bin/bash
# Find the older stdout, stderr, and log4j files and delete them accordingly.
find /dbfs/cluster-logs/<db-cluster-id>/driver/ -name "stdout--*" -exec rm -f {} \;
find /dbfs/cluster-logs/<db-cluster-id>/driver/ -name "stderr--*" -exec rm -f {} \;
find /dbfs/cluster-logs/<db-cluster-id>/driver/ -name "log4j-*.gz" -exec rm -f {} \;
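As written, those `find` commands delete every matching file each run, including freshly rotated ones. A sketch of the same cleanup with two safeguards that are my additions, not part of the original post: return silently if the directory is missing (so a bad path cannot make the cron job fail), and only delete rotated files older than 60 minutes so recent logs survive. `<db-cluster-id>` stays a placeholder to fill in.

```shell
#!/bin/bash
cleanup_driver_logs() {
  local dir="$1"
  [ -d "$dir" ] || return 0   # nothing delivered yet, or wrong path
  # -mmin +60: only files last modified more than 60 minutes ago.
  find "$dir" -type f \
    \( -name 'stdout--*' -o -name 'stderr--*' -o -name 'log4j-*.gz' \) \
    -mmin +60 -exec rm -f {} \;
}

cleanup_driver_logs "/dbfs/cluster-logs/<db-cluster-id>/driver"
```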
Thanks & Regards,
Sugumar Srinivasan.
04-11-2022 11:20 AM
Hi @Sugumar Srinivasan,
Just a friendly follow-up: are you still having issues with your init script? Please let us know.