05-01-2022 10:37 PM
Hi,
We have a scenario where we need to deploy 15 Spark streaming applications on Databricks, all reading from Kafka, onto a single job cluster.
We tried the following approach:
1. Create job 1 with a new job cluster (C1).
2. Create job 2 pointing to C1.
...
15. Create job 15 pointing to C1.
The problem is that if job 1 fails, it terminates the cluster and, with it, all of the other 14 jobs.
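For reference, the "15 jobs pointing at one cluster" setup above can instead be expressed as a single multi-task job that declares one shared job cluster. Below is a minimal sketch of such a Jobs API-style payload built in Python; the cluster key, runtime version, node type, and notebook paths are all hypothetical placeholders, not values from this thread.

```python
# Sketch: one job, one shared job cluster, one task per Kafka stream.
# All names (cluster key, spark_version, node_type_id, notebook paths)
# are assumed placeholders for illustration.

def build_shared_cluster_job(num_streams: int = 15) -> dict:
    """Build a multi-task job payload where every streaming task
    reuses the same declared job cluster instead of its own."""
    return {
        "name": "kafka-streaming-pipelines",
        "job_clusters": [
            {
                "job_cluster_key": "shared_stream_cluster",  # plays the role of C1
                "new_cluster": {
                    "spark_version": "10.4.x-scala2.12",   # assumed runtime
                    "node_type_id": "Standard_DS3_v2",     # assumed node type
                    "num_workers": 4,                      # assumed size
                },
            }
        ],
        "tasks": [
            {
                "task_key": f"stream_{i:02d}",
                # Every task references the shared cluster by key.
                "job_cluster_key": "shared_stream_cluster",
                "notebook_task": {
                    "notebook_path": f"/Streams/topic_{i:02d}",  # hypothetical path
                },
            }
            for i in range(1, num_streams + 1)
        ],
    }

job = build_shared_cluster_job()
print(len(job["tasks"]))  # 15 tasks, all pointing at one cluster definition
```

The point of the sketch is only the shape: a single `job_clusters` entry referenced by key from each task, so the 15 streams are one job rather than 15 jobs coupled through a shared cluster.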
One option we are considering is to have a ***** Kafka topic with no messages in it and a ***** Spark streaming job reading from that ***** topic (which should fail only 0.01% of the time); that job would create the new job cluster (C1), and the rest of the 15 jobs would point to C1. We are assuming job cluster C1 will stay up 99.99% of the time.
The other solution we have is to create a separate job cluster for each job (15 clusters for 15 jobs), but that would kill our operational costs, since these are continuous streaming jobs and some of the topics have very low volume.
Could you please advise on how to address this issue?
Thanks
Jin.
05-02-2022 12:14 AM
@Jin Kim ,
05-04-2022 02:53 AM
@Hubert Dudek , thanks a lot for responding.
05-12-2022 02:00 AM
Hi @Jin Kim, Are you aware of Workflows with jobs? Please go through the docs.
Databricks manages the task orchestration, cluster management, monitoring, and error reporting for all of your jobs. You can run your jobs immediately or periodically through an easy-to-use scheduling system.
Also:
You can define the order of execution of tasks in a job using the Depends on drop-down. You can set this field to one or more tasks in the job.
Configuring task dependencies creates a Directed Acyclic Graph (DAG) of task execution, a common way of representing execution order in job schedulers. For example, consider the following job consisting of four tasks:
Databricks runs upstream tasks before running downstream tasks, running as many of them in parallel as possible. The following diagram illustrates the order of processing for these tasks:
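The "Depends on" behavior described above can be sketched as a small DAG check: tasks declare their upstream dependencies, and anything whose dependencies are satisfied can run in parallel. The four task keys below are made up for illustration (the original diagram is not reproduced here).

```python
# Sketch of a four-task dependency graph in the depends_on style:
# Task2 and Task3 depend on Task1, Task4 depends on Task2 and Task3.

tasks = [
    {"task_key": "Task1", "depends_on": []},
    {"task_key": "Task2", "depends_on": [{"task_key": "Task1"}]},
    {"task_key": "Task3", "depends_on": [{"task_key": "Task1"}]},
    {"task_key": "Task4", "depends_on": [{"task_key": "Task2"},
                                         {"task_key": "Task3"}]},
]

def execution_waves(tasks):
    """Group tasks into waves: a wave runs only after all of its
    upstream tasks finished; tasks within a wave run in parallel."""
    done, waves = set(), []
    remaining = {t["task_key"]: {d["task_key"] for d in t["depends_on"]}
                 for t in tasks}
    while remaining:
        # Ready = every dependency already completed.
        wave = sorted(k for k, deps in remaining.items() if deps <= done)
        if not wave:
            raise ValueError("cycle detected in task dependencies")
        waves.append(wave)
        done.update(wave)
        for k in wave:
            del remaining[k]
    return waves

print(execution_waves(tasks))
# [['Task1'], ['Task2', 'Task3'], ['Task4']]
```

This mirrors the scheduler's behavior: Task1 runs first, Task2 and Task3 run in parallel once it completes, and Task4 runs last.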
05-18-2022 05:59 AM
Hi @Jin Kim, Just a friendly follow-up. Do you still need help, or did the above responses help you find a solution? Please let us know.