by
owen1
• New Contributor
- 675 Views
- 2 replies
- 2 kudos
I set the workflow to run at 12:00 every day, but it failed with the error message below, and I don't know why. Run result unavailable: run failed with error message Unexpected failure while waiting for the cluster (0506-0233...
Latest Reply
Hello @Sangwoo Lee, as mentioned by vignesh, it seems like an infra-related issue. > Does the user (which executes the job) have access to start a cluster? > In case it is not an access issue, and in case you are starting a lot of workflow jobs tog...
1 More Replies
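For reference, a daily 12:00 run is expressed as a Quartz cron expression in the job's schedule settings. Below is a minimal sketch of that fragment as a Python dict (field names follow the Databricks Jobs API; the timezone value is an assumption you would set for your own region):

```python
# Sketch: the schedule fragment of a Databricks Jobs API job definition.
# "0 0 12 * * ?" is Quartz cron syntax for 12:00:00 every day.
schedule = {
    "quartz_cron_expression": "0 0 12 * * ?",  # second minute hour day-of-month month day-of-week
    "timezone_id": "Asia/Seoul",  # assumption: set this to the timezone the job should follow
    "pause_status": "UNPAUSED",
}

# The cluster-start failure itself is usually orthogonal to the schedule:
# check that the job's run-as user is allowed to create/start clusters, and
# that the workspace has not hit its cloud-provider core quota at 12:00.
print(schedule["quartz_cron_expression"])
```

Note the schedule only controls *when* the run is triggered; the "Unexpected failure while waiting for the cluster" error happens afterwards, while the job cluster is being provisioned.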
by
Fred_F
• New Contributor III
- 4370 Views
- 7 replies
- 5 kudos
Hi there, I've a batch process configured in a workflow which fails due to a JDBC timeout on a Postgres DB. I checked the JDBC connection configuration and it seems to work when I query a table and do a df.show() in the process, and it displays th...
Latest Reply
Hi @Fred Foucart, we haven't heard from you since the last response from @Rama Krishna N, and I was checking back to see if his suggestions helped you. Otherwise, if you have any solution, please share it with the community, as it can be helpful to ...
6 More Replies
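A JDBC timeout that only bites on the long batch run, while a quick df.show() works, often points at the driver's socket timeout rather than connection setup. A minimal sketch of the reader options, assuming the standard PostgreSQL JDBC driver (its connectTimeout and socketTimeout URL parameters are in seconds, 0 meaning no timeout; the host, table, and credentials below are placeholders):

```python
# Sketch: Spark JDBC read options for Postgres with explicit timeouts.
# connectTimeout/socketTimeout are PostgreSQL JDBC driver URL parameters
# (seconds; 0 = no timeout). Host/table/credentials are placeholders.
jdbc_options = {
    "url": "jdbc:postgresql://db-host:5432/mydb?connectTimeout=10&socketTimeout=0",
    "dbtable": "public.my_table",
    "user": "my_user",
    "password": "my_password",
    "fetchsize": "10000",  # stream rows in batches instead of buffering the whole result
}

# With a live SparkSession this would be passed as:
#   df = spark.read.format("jdbc").options(**jdbc_options).load()
print(jdbc_options["url"])
```

Raising (or disabling) socketTimeout and setting a fetchsize are the usual first steps when the connection test passes but the full extract times out.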
by
mmlime
• New Contributor III
- 1360 Views
- 4 replies
- 0 kudos
Hi, is there no option to take VMs from a Pool for a new workflow (Azure Cloud)? Default schema for a new cluster:
{
  "num_workers": 0,
  "spark_version": "10.4.x-scala2.12",
  "spark_conf": {
    "spark.master": "local[*, 4]",
    "spark...
Latest Reply
@Michal Mlaka I just checked on the UI and I could find the pools listing under worker type in a job cluster configuration. It should work.
3 More Replies
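To draw workers from a pool, the job cluster spec references the pool's ID instead of a node type. A minimal sketch as a Python dict (field names follow the Databricks Clusters API; the pool ID is a placeholder):

```python
# Sketch: new_cluster spec for a workflow job that takes VMs from an instance pool.
# instance_pool_id replaces node_type_id; driver_instance_pool_id is optional
# and defaults to the worker pool. The pool ID below is a placeholder.
new_cluster = {
    "num_workers": 2,
    "spark_version": "10.4.x-scala2.12",
    "instance_pool_id": "0123-456789-pool-placeholder",         # instead of node_type_id
    "driver_instance_pool_id": "0123-456789-pool-placeholder",  # optional
}

# Selecting the pool under "Worker type" in the job cluster UI generates
# an equivalent spec.
print("instance_pool_id" in new_cluster)
```

When a pool is used, node-type fields are omitted entirely; mixing node_type_id with instance_pool_id in the same spec is rejected.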