<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Automating the re-run of a job (with several tasks) // automating the notification of specific failed tasks after retrying // Error handling in an Azure Data Factory pipeline with a Databricks notebook in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/automating-the-re-run-of-job-with-several-tasks-automate-the/m-p/11038#M6079</link>
    <description>&lt;P&gt;Hi Databricks experts,&lt;/P&gt;&lt;P&gt;I'm using Databricks on Azure and I'd like to understand the following:&lt;/P&gt;&lt;P&gt;1) Is there a way of &lt;B&gt;automating&lt;/B&gt; the re-run of specific failed tasks in a job with several tasks? For example, if I have 4 tasks and tasks 1 and 2 have succeeded while tasks 3 and 4 have failed, I'd like to re-run tasks 3 and 4 one more time. I know Jobs has built-in functionality for re-running failed tasks, but it has to be triggered manually, and I want to automate it. More info here: &lt;A href="https://docs.databricks.com/data-engineering/jobs/jobs.html" target="_blank"&gt;https://docs.databricks.com/data-engineering/jobs/jobs.html&lt;/A&gt; (see "Repair an unsuccessful job run").&lt;/P&gt;&lt;P&gt;2) If some tasks are still failing after two retries, is there a way to send some kind of &lt;B&gt;notification&lt;/B&gt; by &lt;B&gt;invoking an HTTPS URL&lt;/B&gt;, so the endpoint can decide what to do next? I've seen this documentation about retrying a notebook several times: &lt;A href="https://docs.databricks.com/notebooks/notebook-workflows.html?_ga=2.96427868.191663080.1659650759-1341674114.1658447300" target="_blank"&gt;https://docs.databricks.com/notebooks/notebook-workflows.html?_ga=2.96427868.191663080.1659650759-1341674114.1658447300&lt;/A&gt;. I've also seen this reference, which suggests that email is the only notification method for a failed job: &lt;A href="https://stackoverflow.com/questions/61586505/azure-databricks-job-notification-email" target="_blank"&gt;https://stackoverflow.com/questions/61586505/azure-databricks-job-notification-email&lt;/A&gt;. Has this changed?&lt;/P&gt;&lt;P&gt;Additionally:&lt;/P&gt;&lt;P&gt;3) Is there a best practice for &lt;B&gt;orchestrating notebooks from Azure Data Factory&lt;/B&gt; and handling this type of problem there? I've seen this documentation: &lt;A href="https://azure.microsoft.com/es-mx/blog/operationalize-azure-databricks-notebooks-using-data-factory/" target="_blank"&gt;https://azure.microsoft.com/es-mx/blog/operationalize-azure-databricks-notebooks-using-data-factory/&lt;/A&gt;. It suggests that if a notebook fails, the failure can be caught in the Data Factory pipeline and handled there (for example, by sending an email).&lt;/P&gt;&lt;P&gt;Any help will be appreciated.&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Diego&lt;/P&gt;</description>
    <pubDate>Sat, 06 Aug 2022 01:02:47 GMT</pubDate>
    <dc:creator>Diego_MSFT</dc:creator>
    <dc:date>2022-08-06T01:02:47Z</dc:date>
    <item>
      <title>Automating the re-run of a job (with several tasks) // automating the notification of specific failed tasks after retrying // Error handling in an Azure Data Factory pipeline with a Databricks notebook</title>
      <link>https://community.databricks.com/t5/data-engineering/automating-the-re-run-of-job-with-several-tasks-automate-the/m-p/11038#M6079</link>
      <description>&lt;P&gt;Hi Databricks experts,&lt;/P&gt;&lt;P&gt;I'm using Databricks on Azure and I'd like to understand the following:&lt;/P&gt;&lt;P&gt;1) Is there a way of &lt;B&gt;automating&lt;/B&gt; the re-run of specific failed tasks in a job with several tasks? For example, if I have 4 tasks and tasks 1 and 2 have succeeded while tasks 3 and 4 have failed, I'd like to re-run tasks 3 and 4 one more time. I know Jobs has built-in functionality for re-running failed tasks, but it has to be triggered manually, and I want to automate it. More info here: &lt;A href="https://docs.databricks.com/data-engineering/jobs/jobs.html" target="_blank"&gt;https://docs.databricks.com/data-engineering/jobs/jobs.html&lt;/A&gt; (see "Repair an unsuccessful job run").&lt;/P&gt;&lt;P&gt;2) If some tasks are still failing after two retries, is there a way to send some kind of &lt;B&gt;notification&lt;/B&gt; by &lt;B&gt;invoking an HTTPS URL&lt;/B&gt;, so the endpoint can decide what to do next? I've seen this documentation about retrying a notebook several times: &lt;A href="https://docs.databricks.com/notebooks/notebook-workflows.html?_ga=2.96427868.191663080.1659650759-1341674114.1658447300" target="_blank"&gt;https://docs.databricks.com/notebooks/notebook-workflows.html?_ga=2.96427868.191663080.1659650759-1341674114.1658447300&lt;/A&gt;. I've also seen this reference, which suggests that email is the only notification method for a failed job: &lt;A href="https://stackoverflow.com/questions/61586505/azure-databricks-job-notification-email" target="_blank"&gt;https://stackoverflow.com/questions/61586505/azure-databricks-job-notification-email&lt;/A&gt;. Has this changed?&lt;/P&gt;&lt;P&gt;Additionally:&lt;/P&gt;&lt;P&gt;3) Is there a best practice for &lt;B&gt;orchestrating notebooks from Azure Data Factory&lt;/B&gt; and handling this type of problem there? I've seen this documentation: &lt;A href="https://azure.microsoft.com/es-mx/blog/operationalize-azure-databricks-notebooks-using-data-factory/" target="_blank"&gt;https://azure.microsoft.com/es-mx/blog/operationalize-azure-databricks-notebooks-using-data-factory/&lt;/A&gt;. It suggests that if a notebook fails, the failure can be caught in the Data Factory pipeline and handled there (for example, by sending an email).&lt;/P&gt;&lt;P&gt;Any help will be appreciated.&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Diego&lt;/P&gt;</description>
      <pubDate>Sat, 06 Aug 2022 01:02:47 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/automating-the-re-run-of-job-with-several-tasks-automate-the/m-p/11038#M6079</guid>
      <dc:creator>Diego_MSFT</dc:creator>
      <dc:date>2022-08-06T01:02:47Z</dc:date>
    </item>
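Regarding question 1: the manual "Repair an unsuccessful job run" action is also exposed through the Databricks Jobs REST API (`POST /api/2.1/jobs/runs/repair`, which accepts a `rerun_tasks` list), so it can be scripted instead of clicked. A minimal sketch using only the standard library; the workspace URL and token below are placeholders you would substitute:

```python
import json
import urllib.request

# Placeholders -- substitute your own workspace URL and personal access token.
HOST = "https://adb-0000000000000000.0.azuredatabricks.net"
TOKEN = "REDACTED"  # personal access token (placeholder)

def _call(method, path, payload=None, query=""):
    """Tiny stdlib-only helper for calling the Databricks REST API."""
    req = urllib.request.Request(
        f"{HOST}{path}{query}",
        data=json.dumps(payload).encode() if payload else None,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def failed_task_keys(run):
    """Extract the task_keys of failed tasks from a /api/2.1/jobs/runs/get response."""
    return [t["task_key"] for t in run.get("tasks", [])
            if t.get("state", {}).get("result_state") == "FAILED"]

def repair_failed_tasks(run_id):
    """Re-run only the failed tasks of a job run via the repair endpoint."""
    run = _call("GET", "/api/2.1/jobs/runs/get", query=f"?run_id={run_id}")
    failed = failed_task_keys(run)
    if not failed:
        return None  # nothing to repair
    out = _call("POST", "/api/2.1/jobs/runs/repair",
                payload={"run_id": run_id, "rerun_tasks": failed})
    return out.get("repair_id")
```

This could run from a small external monitoring script or a scheduled job that polls recent runs; the repair endpoint is part of Jobs API 2.1 (the link in the reply below points at the 2.0 reference, which predates it).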
    <item>
      <title>Re: Automating the re-run of a job (with several tasks) // automating the notification of specific failed tasks after retrying // Error handling in an Azure Data Factory pipeline with a Databricks notebook</title>
      <link>https://community.databricks.com/t5/data-engineering/automating-the-re-run-of-job-with-several-tasks-automate-the/m-p/11039#M6080</link>
      <description>&lt;P&gt;You can use "retries".&lt;/P&gt;&lt;P&gt;In Workflows, select your job, then the task, and configure retries in the options below.&lt;/P&gt;&lt;P&gt;You can also find more options in the Jobs API reference:&lt;/P&gt;&lt;P&gt;&lt;A href="https://learn.microsoft.com/pt-br/azure/databricks/dev-tools/api/2.0/jobs?source=recommendations" alt="https://learn.microsoft.com/pt-br/azure/databricks/dev-tools/api/2.0/jobs?source=recommendations" target="_blank"&gt;https://learn.microsoft.com/pt-br/azure/databricks/dev-tools/api/2.0/jobs?source=recommendations&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Apr 2023 18:55:30 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/automating-the-re-run-of-job-with-several-tasks-automate-the/m-p/11039#M6080</guid>
      <dc:creator>Lindberg</dc:creator>
      <dc:date>2023-04-20T18:55:30Z</dc:date>
    </item>
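Besides the per-task retry setting, the retry pattern from the notebook-workflows page linked in the original post can be wrapped so that, once the final retry fails, the failure is reported by invoking an HTTPS URL (question 2) instead of email. A sketch with the runner and notifier parameterized; inside Databricks you would pass `dbutils.notebook.run(...)` as the runner, and the webhook URL and names below are hypothetical:

```python
import json
import urllib.request

def notify_webhook(url, payload):
    """POST a JSON payload to an HTTPS endpoint (e.g. a Logic App or Teams webhook)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req).close()

def run_with_retry(run_fn, max_retries=2, on_final_failure=None):
    """Call run_fn(); on exception, retry up to max_retries more times.
    If it still fails, invoke on_final_failure(exc) and re-raise."""
    attempt = 0
    while True:
        try:
            return run_fn()
        except Exception as exc:
            attempt += 1
            if attempt > max_retries:
                if on_final_failure is not None:
                    on_final_failure(exc)
                raise

# In a Databricks notebook this might look like (placeholder names):
# run_with_retry(
#     lambda: dbutils.notebook.run("task3_notebook", 3600),
#     max_retries=2,
#     on_final_failure=lambda e: notify_webhook(
#         "https://example.com/alerts", {"task": "task3", "error": str(e)}),
# )
```

Separating the retry loop from the notifier keeps the policy ("retry twice, then call an HTTPS URL") in one place, whatever the endpoint on the other side decides to do next.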
  </channel>
</rss>

