Hi @Sadam97 ,
This seems to be expected behaviour.
If you are running the jobs in a job cluster:
In job clusters, the Databricks job scheduler treats all streaming queries within a task as belonging to the same job execution context. If any query fails, the overall job is marked as failed and all queries are stopped, which avoids partial or inconsistent updates in automated workflows. For example, suppose you have three streams that depend on each other (e.g., bronze → silver → gold). If the bronze stream fails with an error, the silver and gold streams will have no new data to process, and the cluster would sit idle without doing any useful work. Therefore, this is expected behaviour by design for job clusters.
If you are running the jobs on an interactive cluster, queries are managed by individual notebook sessions instead, so each query can fail and be restarted in isolation without affecting the others.
Recommendations
- For Production: If you need true isolation, schedule the streaming queries as separate tasks within a job (workflow), or run them on distinct clusters. That way, a failure in one query does not stop the others.
- For Development: Interactive clusters provide more flexibility for multi-query execution.
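To illustrate the production recommendation, here is a rough sketch of a Databricks Jobs JSON definition that runs each stream as its own task. The job name, task keys, and notebook paths below are placeholders you would replace with your own; note that with no `depends_on` between the tasks, each streaming task runs and fails independently:

```json
{
  "name": "streaming-pipeline",
  "tasks": [
    {
      "task_key": "bronze_stream",
      "notebook_task": { "notebook_path": "/Pipelines/bronze_stream" },
      "job_cluster_key": "stream_cluster"
    },
    {
      "task_key": "silver_stream",
      "notebook_task": { "notebook_path": "/Pipelines/silver_stream" },
      "job_cluster_key": "stream_cluster"
    },
    {
      "task_key": "gold_stream",
      "notebook_task": { "notebook_path": "/Pipelines/gold_stream" },
      "job_cluster_key": "stream_cluster"
    }
  ]
}
```

For even stronger isolation, you can point each task at its own `job_cluster_key` (or a separate job entirely) so that a cluster-level failure in one stream cannot take down the others.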