In Databricks workflows, the "if-else" condition task and depends_on logic do not behave exactly like if-else statements in ordinary code. If a task depends on another task's outcome and that outcome does not match (for example, the condition evaluates to false but the dependency requires true), the dependent task simply does not run. This is expected behavior, but it is not the same as having alternative branches in regular code, where you can control what happens on both the true and false paths.
Workflow Logic in Databricks
- The depends_on field with an outcome creates a conditional dependency between tasks (see the sketch after this list).
- If the dependency condition is not met, Databricks skips the dependent task rather than running it; the overall job does not fail unless you design it to.
- If your workflow branches only on "true" (or only on "false"), tasks that depend on the opposite outcome will never run.
- Databricks workflows do not automatically fall through to an "else" branch. You must explicitly model both paths.
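For concreteness, here is a minimal YAML sketch of an outcome-gated dependency, using the condition_task / depends_on / outcome fields from the Databricks Jobs task spec (the same structure applies in Asset Bundles YAML or Jobs API JSON). The job parameter trigger_type, the comparison value "scheduled", and the notebook path are hypothetical placeholders:

```yaml
tasks:
  # Condition task: resolves to a "true" or "false" outcome at run time.
  - task_key: check_type_of_trigger
    condition_task:
      op: "EQUAL_TO"
      left: "{{job.parameters.trigger_type}}"  # hypothetical job parameter
      right: "scheduled"

  # Gated task: runs only when the condition's outcome is "true";
  # on any other outcome it is skipped, not failed.
  - task_key: get_email_infos
    depends_on:
      - task_key: check_type_of_trigger
        outcome: "true"
    notebook_task:
      notebook_path: ./notebooks/get_email_infos  # hypothetical path
```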
Why Is the Task Skipped?
- In your example, get_email_infos will not run unless every one of its depends_on outcomes is satisfied; multiple outcome dependencies are combined, so all of them must match (see the sketch after this list).
- If you want either check_type_of_trigger = "true" or check_status_to_schedule = "false" to be enough to run the task, you must split the workflow into separate tasks for each condition.
- For true "if-else" branching, you need both a "true" and a "false" outcome, each leading to its own downstream task.
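As a sketch of the situation described above, using the task names from the question (notebook path hypothetical): a dependency list that carries two outcomes requires all of them to match before the task runs.

```yaml
  # Fragment: the combined dependency from the question.
  # Both outcomes must match before get_email_infos runs; if either
  # condition resolves the other way, the task is skipped.
  - task_key: get_email_infos
    depends_on:
      - task_key: check_type_of_trigger
        outcome: "true"
      - task_key: check_status_to_schedule
        outcome: "false"
    notebook_task:
      notebook_path: ./notebooks/get_email_infos  # hypothetical path
```

If check_type_of_trigger resolves to "false", or check_status_to_schedule resolves to "true", get_email_infos is skipped.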
How to Model True If-Else Branches
- Use separate downstream tasks for each outcome, rather than combining them in a single task's dependency list.
- For exclusive branching, introduce dummy or passthrough tasks so that each path is handled and only one runs based on the condition (see the sketch after this list).
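Putting this together, here is a hedged sketch of an explicit if/else split, with one downstream task per outcome. The "else" task handle_other_trigger and the notebook paths are hypothetical names for illustration; exactly one of the two branches runs on a given job run.

```yaml
tasks:
  - task_key: check_type_of_trigger
    condition_task:
      op: "EQUAL_TO"
      left: "{{job.parameters.trigger_type}}"  # hypothetical job parameter
      right: "scheduled"

  # "if" branch: runs only when the condition is true.
  - task_key: get_email_infos
    depends_on:
      - task_key: check_type_of_trigger
        outcome: "true"
    notebook_task:
      notebook_path: ./notebooks/get_email_infos

  # "else" branch: runs only when the condition is false.
  - task_key: handle_other_trigger  # hypothetical branch task
    depends_on:
      - task_key: check_type_of_trigger
        outcome: "false"
    notebook_task:
      notebook_path: ./notebooks/handle_other_trigger
```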
Key Points
- Databricks conditional dependencies are not true "if-else"; they are gating mechanisms.
- If a condition is unmet, the job does not automatically fail; it just skips downstream tasks gated by those dependencies.
- To model "if-else" logic, split your flow so each branch has its own tasks and conditions.
For more details, see this Databricks forum discussion and the Databricks documentation on workflow dependencies.