02-11-2026 01:20 AM
How can we configure a job in a different Azure application to be triggered after the completion of an Azure Databricks job? Once the Databricks job is successful, the job in the third-party application hosted in Azure should start. I attempted to use the default webhook notification available in Databricks, which performs an HTTP POST, but I couldn't find useful information regarding the job in the RequestBody of the WebhookData parameter. Do you have any suggestions?
02-11-2026 04:41 AM
Greetings @PradeepPrabha , I did some digging and here is what I found.
Short answer: Use Databricks job notifications with an HTTP webhook that points to a lightweight receiver in Azure (for example, an Azure Function or a Logic App). The webhook payload includes workspace_id, job_id, and run_id. Your receiver uses run_id to call the Databricks Jobs API, fetch full run details, and then trigger the downstream job. Make sure the receiver returns a 2xx response quickly to avoid retries and duplicate events.
Recommended pattern (event-driven)
Configure a system destination (one-time, admin only)
In Admin Settings → Notifications → Add destination, choose Webhook (or Slack, Teams, PagerDuty if that fits your use case). For webhooks, you can configure basic auth. Databricks enforces HTTPS and requires certificates from a trusted CA.
Attach the destination to your job
Open the job, go to Job notifications, and add notifications for Start, Success, and/or Failure. Select the destination you created. You can configure up to three destinations per event.
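If you manage jobs as code rather than through the UI, the same notifications can be attached via the Jobs API: the job settings accept a webhook_notifications block that references the destination by its ID (the UUID shown for the destination in Admin Settings). A minimal sketch of the relevant fragment, with a made-up destination ID:

```json
{
  "webhook_notifications": {
    "on_success": [
      { "id": "0481e838-0a59-4eff-9541-a4ca96f8f3b5" }
    ],
    "on_failure": [
      { "id": "0481e838-0a59-4eff-9541-a4ca96f8f3b5" }
    ]
  }
}
```

This fragment goes inside the job settings you send to Jobs Create or Update; on_start is also supported.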
Build a simple receiver in Azure (Function or Logic App)
Parse the incoming JSON payload to extract workspace_id, job_id, and run_id.
Use run_id to call Jobs Runs Get (and optionally Runs Get Output) to retrieve status, task-level details, and error messages.
Trigger your third-party job using its API.
Return HTTP 2xx within roughly five seconds and offload any heavier work asynchronously. If you don't, Databricks will retry the notification and you'll often see two or three duplicates on failures. Add idempotency logic keyed on run_id to safely dedupe.
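The receiver steps above can be sketched in plain Python (the Azure Functions binding boilerplate is omitted, and the workspace URL and token below are placeholders you would replace with your own, ideally pulled from Key Vault or a managed identity):

```python
import json
import urllib.request

# Hypothetical values: substitute your workspace URL and a real token.
DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
DATABRICKS_TOKEN = "dapi-REPLACE_ME"


def parse_notification(body: str) -> dict:
    """Pull the identifiers Databricks includes in the webhook payload."""
    payload = json.loads(body)
    return {
        "event_type": payload.get("event_type"),
        "workspace_id": payload.get("workspace_id"),
        "job_id": payload.get("job", {}).get("job_id"),
        "run_id": payload.get("run", {}).get("run_id"),
    }


def fetch_run_details(run_id: str) -> dict:
    """Enrich the event by calling Jobs Runs Get for full run context."""
    req = urllib.request.Request(
        f"{DATABRICKS_HOST}/api/2.1/jobs/runs/get?run_id={run_id}",
        headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

Your Function would call parse_notification on the request body, return 200 immediately (queueing the rest), then use fetch_run_details and trigger the downstream job from the background worker.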
Example payload (start, success, or failure):
{
  "event_type": "jobs.on_success",
  "workspace_id": "your_workspace_id",
  "run": { "run_id": "12345" },
  "job": { "job_id": "67890", "name": "job_name" }
}
Tip
If you need richer context than what the webhook provides, the supported approach today is to "fan out" using run_id and job_id via the Jobs API. Fully custom webhook payloads aren't supported for job notifications, so enrichment via API calls is the intended pattern.
Common gotchas
Networking and allowlisting: If you restrict inbound traffic, make sure the Databricks control plane IPs used for notifications are allowlisted. Several "silent delivery failure" cases boil down to this.
Slack or Teams formatting: Don't build logic that depends on message structure. If you need a stable schema, use a generic webhook and enrich the payload yourself via the Jobs API.
Alternatives
Call the third-party API directly from the Databricks job on success, for example as a final notebook task using requests. This is often the simplest approach if you control the job code.
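For the direct-call alternative, a final notebook task can be as small as the sketch below. The endpoint URL and payload shape are hypothetical stand-ins for whatever your third-party application's API expects (it uses only the standard library, though requests works equally well in a notebook):

```python
import json
import urllib.request

# Hypothetical downstream endpoint; replace with your application's trigger URL.
TRIGGER_URL = "https://my-third-party-app.azurewebsites.net/api/start-job"


def build_trigger_payload(job_name: str, run_page_url: str) -> bytes:
    """Minimal context for the downstream service; adjust to its contract."""
    return json.dumps(
        {"source_job": job_name, "run_page_url": run_page_url}
    ).encode("utf-8")


def trigger_downstream(job_name: str, run_page_url: str) -> int:
    """POST to the third-party API and return the HTTP status code."""
    req = urllib.request.Request(
        TRIGGER_URL,
        data=build_trigger_payload(job_name, run_page_url),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status
```

Because this task only runs when the preceding tasks succeed (or when you gate it with a run-if condition), the downstream job fires exactly on success without any webhook plumbing.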
Use an external orchestrator for cross-system dependencies. Azure Data Factory can run Databricks jobs and then continue downstream in the same pipeline. Apache Airflow has first-class Databricks operators and can schedule follow-on work once a Databricks run completes.
Why the webhook felt "empty"
This is by design. Databricks keeps the job notification payload intentionally minimal: event_type, workspace_id, job_id, and run_id. The expectation is that you use those identifiers to query the Jobs API for full context (tasks, timings, errors) and then dispatch whatever downstream action you need.
Hope this helps, Louis.
5 hours ago
Thank you.
Thank you for the detailed answer!
I have tested both the Azure Function approach and an Azure runbook. Both work fine.
I also tested adding a final task with an "if all other notebooks succeeded" run condition that performs an HTTP POST to an Azure Function.