@SB93
The error message indicates that the cluster failed to launch because the Spark driver was unresponsive. Common causes include library conflicts, an incorrect metastore configuration, or other cluster configuration issues. Since the pipeline previously ran without problems, something in the environment has likely changed. Here are some steps to troubleshoot and resolve the issue:
Troubleshooting Steps:
- Check for recent changes to the pipeline or cluster configuration.
- Review the driver logs and cluster event logs for additional error details.
- Check for library conflicts and verify that installed libraries are compatible with the Databricks Runtime version.
- Verify the metastore configuration and its access permissions.
- Confirm that cluster policies and resource allocation are correct.
- Restart or recreate the cluster and check whether the issue persists.
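Once you have the driver log in hand (downloaded from the cluster UI or a configured log delivery location), a quick way to triage it is to scan for common failure signatures. Below is a minimal sketch; the pattern list and cause categories are illustrative assumptions on my part, not an official taxonomy:

```python
import re

# Illustrative failure signatures sometimes seen in Spark driver logs.
# These patterns are examples only -- extend them for your environment.
FAILURE_PATTERNS = {
    "library conflict": re.compile(
        r"NoClassDefFoundError|NoSuchMethodError|ClassNotFoundException"
    ),
    "metastore problem": re.compile(
        r"MetaException|HiveException", re.IGNORECASE
    ),
    "out of memory": re.compile(
        r"OutOfMemoryError|GC overhead limit exceeded"
    ),
}

def triage_driver_log(log_text: str) -> dict:
    """Map each suspected cause to the first log line that matches it."""
    findings = {}
    for cause, pattern in FAILURE_PATTERNS.items():
        for line in log_text.splitlines():
            if pattern.search(line):
                findings[cause] = line.strip()
                break
    return findings
```

If the scan surfaces a library-conflict signature, removing recently added libraries (or pinning them to versions known to work with your runtime) is usually the fastest next step before recreating the cluster.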