@Dipesh Yogi - Please refer to the current behavior below.
When you schedule a workflow and configure task dependencies, for example configuring task2 so that it does not start until task1 completes, a failure of task1 means the subsequent (downstream) tasks will not be triggered, and the run reports the message below:
Task <Task-name> failed. This caused all downstream tasks to get skipped.
Reference - https://learn.microsoft.com/en-us/azure/databricks/workflows/jobs/jobs#--task-dependencies.
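For illustration, here is a minimal sketch of how that kind of dependency is declared when creating a job through the Jobs API 2.1. The workspace URL, token, notebook paths, and cluster ID are placeholders you would replace with your own values:

```python
import requests

# Placeholder workspace URL and personal access token -- replace with your own.
HOST = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"

# task2 declares a dependency on task1, so it only starts after task1 succeeds.
# If task1 fails, task2 (and any other downstream tasks) are skipped.
job_settings = {
    "name": "example-dependent-job",
    "tasks": [
        {
            "task_key": "task1",
            "notebook_task": {"notebook_path": "/Workspace/notebooks/task1"},
            "existing_cluster_id": "<cluster-id>",
        },
        {
            "task_key": "task2",
            "depends_on": [{"task_key": "task1"}],
            "notebook_task": {"notebook_path": "/Workspace/notebooks/task2"},
            "existing_cluster_id": "<cluster-id>",
        },
    ],
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_settings,
)
resp.raise_for_status()
print(resp.json())  # returns the job_id of the new job
```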
The documentation below also explains the Repair and rerun feature of workflows, which addresses your specific scenario, but only at the individual run level.
https://learn.microsoft.com/en-us/azure/databricks/workflows/jobs/how-to-fix-job-failures
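As a rough sketch, the same repair can also be triggered programmatically through the Jobs API 2.1 repair endpoint. The run ID and task keys below are placeholders for a failed run in your workspace:

```python
import requests

HOST = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"

# Re-run only the failed task and its skipped downstream tasks
# for an existing (failed) job run.
repair_request = {
    "run_id": 123456,                  # placeholder: run ID of the failed run
    "rerun_tasks": ["task1", "task2"], # task keys to repair
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/runs/repair",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=repair_request,
)
resp.raise_for_status()
print(resp.json())  # returns a repair_id for this repair attempt
```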
Unfortunately, there is currently no mechanism to pause a workflow's schedule automatically after the first failure. However, you can configure email alerts on failure and, upon receiving an alert, manually pause the schedule. We will raise this internally as a feature request to pause the schedule on failure, and it will be picked up based on prioritization. Thanks for bringing this up!
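As a workaround sketch (assuming the Jobs API 2.1, with a placeholder job ID, email address, and schedule), you can add a failure email notification to the job and then, after being alerted, pause its schedule with a jobs/update call:

```python
import requests

HOST = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"
JOB_ID = 123456  # placeholder: ID of the scheduled job

headers = {"Authorization": f"Bearer {TOKEN}"}

# 1) Add an email alert on failure so you are notified as soon as a task fails.
requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers=headers,
    json={
        "job_id": JOB_ID,
        "new_settings": {
            "email_notifications": {"on_failure": ["you@example.com"]}
        },
    },
).raise_for_status()

# 2) After receiving the alert, pause the schedule until the issue is fixed.
#    Keep your existing cron expression and timezone; only pause_status changes.
requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers=headers,
    json={
        "job_id": JOB_ID,
        "new_settings": {
            "schedule": {
                "quartz_cron_expression": "0 0 * * * ?",  # placeholder: your cron
                "timezone_id": "UTC",                     # placeholder: your timezone
                "pause_status": "PAUSED",
            }
        },
    },
).raise_for_status()
```

Setting "pause_status" back to "UNPAUSED" with the same call resumes the schedule once the underlying failure is resolved.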