7 hours ago
In the January 2026 release notes, it was announced that: "Pipelines now support queued execution mode, where multiple update requests are automatically queued and executed sequentially instead of failing with conflicts. This simplifies operations for pipelines with frequent update triggers and eliminates the need for manual retry coordination."
However, I am still seeing concurrent runs fail with `RUN_EXECUTION_ERROR: Pipeline update already in progress`. I also don't see an option to apply this queue setting in the UI, nor any documentation for it in DAB. I tried setting `queue: enabled: true` on the job in DAB, but this does not work.
Has the pipeline queue been working for anyone else?
6 hours ago
What’s going on
Two separate things are in play here:

- There is a known Jobs-side bug where a single job run can issue duplicate StartUpdate requests or lose the response, causing `RUN_EXECUTION_ERROR: Pipeline update already in progress` even when only one update is actually running; see ES-1635313 and follow-ons, and Nokia BL-16616 / SUP-27441, where the pipeline completes but the job task still shows this error.
- The setting (`queue.enabled: true`) you tried in DAB is a Jobs-level run queue, not the DLT/Lakeflow pipeline queuing feature, so toggling that won't change the pipeline control-plane behavior.

Below are 3 options.
Option 1: Single orchestrator + serialization

Mitigate by ensuring there is exactly one place that can trigger the pipeline and that it never overlaps runs:

- Drive the pipeline from a single job with `max_concurrent_runs = 1`, or enable the Jobs run queue on that job:

```yaml
jobs:
  my_pipeline_job:
    queue:
      enabled: true
```

- Ensure no other caller (second job, API client, manual trigger) issues its own StartUpdate (a client-side serialization sketch follows below).

This won't fix the known Jobs bug cases where one job run issues duplicate StartUpdate, but it does remove most genuine concurrency conflicts.
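If you genuinely need a second trigger path, one mitigation in the spirit of Option 1 is to serialize StartUpdate calls client-side. A minimal sketch using the Databricks Python SDK — the function name, polling interval, and timeout are illustrative assumptions, not an official pattern:

```python
import time

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.pipelines import PipelineState


def start_update_serialized(pipeline_id: str, timeout_s: int = 1800, poll_s: int = 30):
    """Wait until the pipeline is IDLE, then issue a single StartUpdate."""
    w = WorkspaceClient()  # picks up auth from env vars / .databrickscfg
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        state = w.pipelines.get(pipeline_id=pipeline_id).state
        if state == PipelineState.IDLE:
            # Note: a small race window remains between get() and start_update();
            # this reduces, but does not eliminate, concurrent-update conflicts.
            return w.pipelines.start_update(pipeline_id=pipeline_id)
        time.sleep(poll_s)
    raise TimeoutError(f"pipeline {pipeline_id} not idle after {timeout_s}s")
```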
Option 2: Retries

If your pattern is "the job task sometimes fails with RUN_EXECUTION_ERROR but the underlying pipeline update actually succeeded" (as in ES-1635313 / the Nokia cases), treat this as a transient integration bug:

- Set a small `max_retries` on the pipeline task itself so the job auto-retries when it hits this error (see the DAB sketch below).
- Confirm via the pipeline event log (go/dlt/debug) that only one update ran and completed.

This doesn't give you true queuing semantics, but it makes the symptom operationally tolerable until the backend rollout and Jobs fixes are fully in place.
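A minimal DAB sketch of that retry setting — job, task, and pipeline names are placeholders, and note that Jobs retries fire on any task failure, not just this specific error:

```yaml
resources:
  jobs:
    my_pipeline_job:
      tasks:
        - task_key: run_pipeline
          pipeline_task:
            pipeline_id: ${resources.pipelines.my_pipeline.id}
          # Retry a couple of times with a short backoff; covers the case where
          # the task fails with RUN_EXECUTION_ERROR but the update succeeded.
          max_retries: 2
          min_retry_interval_millis: 60000
```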
Option 3: Open a support ticket

Given you're still seeing RUN_EXECUTION_ERROR post-announcement, and there are active incidents (ES-1635313, BL-16616/SUP-27441) specifically around this error and queued/duplicate StartUpdate behavior:

- File a ticket along the lines of: "Still seeing `RUN_EXECUTION_ERROR: Pipeline update already in progress` despite queued execution being announced; please check against ES-1635313 / BL-16616 behavior."
- In parallel, apply Option 1 (`max_concurrent_runs` or the job queue) and, if needed, Option 2 (retries) as immediate mitigations.

Recommendation:
Use Option 3 as the primary path so engineering can confirm whether your workspace/pipelines are on the queued-execution rollout and attach you to the ongoing fixes. In the meantime, implement Option 1 (single orchestrator + serialization) plus light retries from Option 2 to reduce operational pain.
5 hours ago
Thank you very much for the detailed response! We unfortunately can't proceed with option 1, as we do require multiple places that can trigger the pipeline (an API call to the parent job, and a direct API call to the pipeline itself). This is due to the specific configurable options available in a pipeline API call vs a job API call, namely `full_refresh_selection` to fully refresh specific tables.
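For reference, the direct pipeline call we depend on looks roughly like this (pipeline ID and table names are placeholders):

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
# Trigger an update that fully refreshes only the selected tables.
# The Jobs pipeline_task only exposes a blanket full_refresh flag,
# so per-table refreshes require calling the pipeline API directly.
update = w.pipelines.start_update(
    pipeline_id="<pipeline-id>",  # placeholder
    full_refresh_selection=["schema.table_a", "schema.table_b"],  # placeholders
)
print(update.update_id)
```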
We do have queue enabled at the job level and a small `max_retries` on the pipeline.
For now it seems we will need to open a ticket and wait until the pipeline execution queue is fully rolled out.