Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

owen1
by New Contributor
  • 1300 Views
  • 2 replies
  • 2 kudos

Workflow cluster creation error

I set the workflow to run at 12:00 every day, but it failed with the error message below, and I don't know why. Run result unavailable: run failed with error message Unexpected failure while waiting for the cluster (0506-0233...

Latest Reply
Murthy1
Contributor II
  • 2 kudos

Hello @Sangwoo Lee, as mentioned by vignesh, it seems like an infra-related issue. > Does the user (which executes the job) have access to start a cluster? > In case it is not an access issue, and in case you are starting a lot of workflow jobs tog...

1 More Reply
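
For anyone debugging a similar failure: the full error behind "Run result unavailable" can be pulled from the Jobs API (jobs/runs/get). A minimal sketch in Python, assuming the workspace URL and a personal access token are set in environment variables, and using a hypothetical run_id taken from the failed run's URL:

import os
import requests

# Placeholders: DATABRICKS_HOST (e.g. https://adb-123.azuredatabricks.net)
# and DATABRICKS_TOKEN (a personal access token) must be set in the environment.
host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

# Fetch the run's state; 123456 stands in for the failed run's run_id.
resp = requests.get(
    f"{host}/api/2.1/jobs/runs/get",
    headers={"Authorization": f"Bearer {token}"},
    params={"run_id": 123456},
)
resp.raise_for_status()
state = resp.json()["state"]
print(state.get("result_state"), "-", state.get("state_message"))

The state_message typically names the cluster-launch failure (permissions, quota, or a cloud-provider error), which helps confirm or rule out the access issue suggested above.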
Fred_F
by New Contributor III
  • 7800 Views
  • 5 replies
  • 5 kudos

JDBC connection timeout on workflow cluster

Hi there, I have a batch process configured in a workflow which fails due to a JDBC timeout on a Postgres DB. I checked the JDBC connection configuration and it seems to work when I query a table and do a df.show() in the process, and it displays th...

Latest Reply
RKNutalapati
Valued Contributor
  • 5 kudos

Hi @Fred Foucart, the above code looks good to me. Can you try the below code as well?

spark.read \
  .format("jdbc") \
  .option("url", f"jdbc:postgresql://{host}/{database}") \
  .option("driver", "org.postgresql.Driver") \
  .option("user", username) ...

4 More Replies
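
Since the snippet above is cut off, here is one way the read could be completed, with pgJDBC's timeout options added. A sketch only, assuming host, database, username, and password are already defined in the notebook, and using a hypothetical table name:

# Runs in a Databricks notebook, where `spark` is the ambient SparkSession.
df = (
    spark.read.format("jdbc")
    .option("url", f"jdbc:postgresql://{host}/{database}")
    .option("driver", "org.postgresql.Driver")
    .option("user", username)
    .option("password", password)
    .option("dbtable", "public.orders")  # hypothetical table
    # Spark forwards unrecognized options to the JDBC driver, so pgJDBC's
    # timeout settings (both in seconds) pass straight through:
    .option("connectTimeout", "10")      # max time to establish the connection
    .option("socketTimeout", "300")      # max time per read; 0 disables the timeout
    .load()
)
df.show()

Raising socketTimeout (or setting it to 0) is often what fixes a long-running batch read that dies mid-query.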
mmlime
by New Contributor III
  • 2846 Views
  • 4 replies
  • 0 kudos

Resolved! Can I use VMs from Pool for my Workflow cluster?

Hi, is there no option to take VMs from a Pool for a new workflow (Azure Cloud)? Default schema for a new cluster:

{
  "num_workers": 0,
  "spark_version": "10.4.x-scala2.12",
  "spark_conf": {
    "spark.master": "local[*, 4]",
    "spark...

Latest Reply
Vivian_Wilfred
Databricks Employee
  • 0 kudos

@Michal Mlaka, I just checked in the UI and I could find the pools listed under Worker type in a job cluster configuration. It should work.

3 More Replies
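
The same works outside the UI: a job's new_cluster spec accepts an instance_pool_id (node_type_id is then omitted, since the pool fixes the VM type). A minimal sketch against the Jobs API 2.1, with placeholder names and IDs:

import os
import requests

host = os.environ["DATABRICKS_HOST"]    # placeholder workspace URL
token = os.environ["DATABRICKS_TOKEN"]  # placeholder personal access token

# Hypothetical job: a single notebook task on a pool-backed job cluster.
job_spec = {
    "name": "pool-backed-workflow",
    "tasks": [{
        "task_key": "main",
        "notebook_task": {"notebook_path": "/Workspace/Users/me/etl"},  # placeholder
        "new_cluster": {
            "spark_version": "10.4.x-scala2.12",
            "num_workers": 2,
            "instance_pool_id": "pool-1234",  # placeholder ID from the Pools page
        },
    }],
}

resp = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=job_spec,
)
resp.raise_for_status()
print("created job", resp.json()["job_id"])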