Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

should have the option to mark succeeded with failures as a failure rather than a success

kenmyers-8451
Contributor

Hi, we're having an issue with the way "succeeded with failures" is handled. We get emails telling us that we have a failure, which is correct, but the pipeline then treats the run as a success and keeps going, whereas we'd like the whole process to be reported as a failure. We have a fairly nested series of workflows that looks like this:

- main workflow
  |- sub workflow
      |- sub sub workflow
          |- some failing task that isn't a leaf/terminating node

Now here's the same structure again, but with notes about our issues with the state (it looked messy, hence putting just the structure above):

- main workflow [email says "success", state in databricks says "succeeded", but this is wrong because sub workflow should really be classified as a failure, not a success]
  |- sub workflow [email says "success", state in databricks says "succeeded", but this is wrong because sub sub workflow should really be classified as a failure, not a success]
      |- sub sub workflow [email sent says "failure", state in databricks says "succeeded with failures", ideally this should probably just be a failure]
          |- some failing task that isn't a leaf/terminating node [marked as failure]

So in summary, it would be ideal to mark a workflow run that has "succeeded with failures" as a failure rather than a success. I haven't found many people talking about this, but I did find someone on Reddit describing the same issue.

2 REPLIES

Advika
Databricks Employee

Hello @kenmyers-8451!

This is valid product feedback that would be worth flagging.
In the meantime, to ensure your pipeline run fails when any sub-task fails, you can add a final sentinel task at the end of your workflow. This task should programmatically inspect the state of all prior tasks. If it detects any task that failed or was only partially successful, it can raise an error, causing the overall workflow to fail and trigger accurate notifications.
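
To make that concrete, here's a rough sketch of what such a sentinel task could look like in Python, calling the Jobs API `runs/get` endpoint to inspect the states of the other tasks in the same run. The host, token, and run-id handling here are illustrative assumptions (for example, the run id could be passed to the sentinel task via a job parameter such as {{job.run_id}}, and credentials supplied however your workspace handles auth); the set of failing result-state names should also be checked against your workspace.

```python
# Sentinel task sketch (assumptions: runs as the final task of the workflow;
# PARENT_RUN_ID is passed in, e.g. via a {{job.run_id}} job parameter;
# DATABRICKS_HOST and DATABRICKS_TOKEN provide Jobs API access).
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]      # e.g. https://<workspace>.cloud.databricks.com
TOKEN = os.environ["DATABRICKS_TOKEN"]    # PAT or service principal token
RUN_ID = os.environ["PARENT_RUN_ID"]      # the job run whose tasks we inspect

# Fetch the current job run, including the state of every task in it.
resp = requests.get(
    f"{HOST}/api/2.1/jobs/runs/get",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"run_id": RUN_ID},
)
resp.raise_for_status()
run = resp.json()

# Result states we want to treat as a real failure (adjust as needed).
FAILING_STATES = {"FAILED", "TIMEDOUT", "CANCELED", "UPSTREAM_FAILED"}

bad_tasks = [
    t["task_key"]
    for t in run.get("tasks", [])
    if t.get("state", {}).get("result_state") in FAILING_STATES
]

if bad_tasks:
    # Raising here fails the sentinel task, so the overall run is reported
    # as a failure instead of "succeeded with failures".
    raise RuntimeError(f"Upstream tasks did not fully succeed: {bad_tasks}")
```

Since each level of nesting currently reports success, a sentinel like this would probably need to sit at the level where the partially failing tasks actually live (the sub sub workflow in your example), so the failure can then bubble up through the parent run-job tasks.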

kenmyers-8451
Contributor

thanks @Advika, we'll give that a shot for now
