I'm following this post to run an init script to install gdal.
My script is simply:
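(The script body itself did not survive extraction; based on the commands discussed in the reply below, it presumably resembled this sketch — the GDAL package names are assumptions:)

```shell
#!/bin/bash
# Hypothetical reconstruction of the lost init script:
# refresh the package index, upgrade system packages, then install GDAL.
apt-get update
apt-get -y upgrade
apt-get install -y gdal-bin libgdal-dev
```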
Hi @meystingray, the script is timing out because it runs for too long. The apt-get update and apt-get upgrade commands update and upgrade all of the system packages, which can take a significant amount of time depending on the size of the updates and the speed of the network connection.
To fix this issue, you can try the following steps:
1. Split the script into two separate commands.
2. Run the commands individually to see which one is causing the timeout. If one of the commands is taking too long, you can try to optimize it or find an alternative solution.
3. If the apt-get upgrade command is causing the timeout, you can limit the number of packages being upgraded by specifying the package names explicitly. For example, instead of apt-get upgrade, you can use apt-get install package1 package2 package3 to upgrade only the specified packages.
4. You can also try increasing the timeout duration by adjusting the spark.databricks.cluster.profile.timeoutSeconds parameter in your cluster configuration.
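Putting steps 1–3 together, a trimmed-down init script might look like the following sketch. The exact GDAL package names (gdal-bin, libgdal-dev) are assumptions, not something confirmed in this thread:

```shell
#!/bin/bash
# Sketch: skip the full system upgrade and install only the packages
# that are actually needed (package names here are assumptions).
set -euxo pipefail          # fail fast and echo each command to the log

apt-get update              # refresh the package index only
# Non-interactive, targeted install instead of a blanket 'apt-get upgrade',
# as suggested in step 3 above:
DEBIAN_FRONTEND=noninteractive apt-get install -y gdal-bin libgdal-dev
```

For step 4, the parameter would be set as a key/value pair in the cluster's Spark config, e.g. spark.databricks.cluster.profile.timeoutSeconds with a larger value than the default.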
Really appreciate the response, @Kaniz. Following your advice, I tried commenting out various parts of the script, but I kept getting an error of the form: "Init script failure:
Cluster scoped init script /Users/XYZ/gdal_install.sh failed: Script exit status is non-zero"
The only thing that worked was:
For context, I'm trying to install gdal on Databricks Runtime Version 13.2 ML. Thanks, Sean
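One way to pin down which command is returning the non-zero exit status is to trace the script and tee its output into a persistent log that survives cluster termination. A sketch, assuming DBFS is mounted at /dbfs and the log path is freely chosen:

```shell
#!/bin/bash
# Debugging sketch: trace every command and capture stdout/stderr so the
# failing command is visible after the cluster shuts down.
LOG="/dbfs/cluster-logs/gdal_install_$(date +%s).log"   # assumed path
mkdir -p "$(dirname "$LOG")"
exec > >(tee -a "$LOG") 2>&1   # mirror all output into the log file
set -x                         # echo each command before it runs

apt-get update
apt-get install -y gdal-bin libgdal-dev
echo "install finished with status $?"
```

With set -x enabled, the last command printed in the log before the script dies is the one that produced the non-zero exit status.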