We needed job_id and run_id in a custom metrics Delta table so we could join to `system.lakeflow.job_run_timeline`. Tried four approaches before finding the one that works on serverless compute.

What doesn't work: `spark.conf.get("spark.databricks.job.id...`
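The post's working approach is cut off above, so here is a hedged sketch of one pattern that does work on serverless: pass the IDs in as job parameters using Databricks dynamic value references (`{{job.id}}` and `{{job.run_id}}`) and read them with `dbutils.widgets`. The parameter names and the metrics table below are illustrative assumptions, not necessarily the thread's exact solution.

```python
# Sketch, assuming the task is configured with job parameters that use
# dynamic value references:
#   job_id -> {{job.id}}
#   run_id -> {{job.run_id}}
# These are resolved by the jobs service, so they are available even on
# serverless compute where the spark.databricks.job.* confs are not set.
from pyspark.sql import functions as F

job_id = dbutils.widgets.get("job_id")  # assumed parameter name
run_id = dbutils.widgets.get("run_id")  # assumed parameter name

metrics = (
    spark.createDataFrame(
        [(job_id, run_id, "rows_written", 12345)],
        "job_id string, run_id string, metric string, value long",
    )
    .withColumn("recorded_at", F.current_timestamp())
)

# "main.observability.job_metrics" is a hypothetical target table.
metrics.write.mode("append").saveAsTable("main.observability.job_metrics")
```

With job_id and run_id stored as strings, the table joins directly to `system.lakeflow.job_run_timeline` on those two columns.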
Part 2 of 3 — Databricks Streaming Architecture

The instinct after Part 1 was obvious. If running eight queries in one task means one failure can hide while others keep running — split them into multiple tasks. Separate concerns. Give each component it...
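For context on the failure mode Part 1 describes, here is a hedged sketch of the single-task pattern: several structured streaming queries started in one notebook, where the task blocks on only one of them, so a different query can die without failing the job. The topics, broker, paths, and table names are illustrative assumptions, not from the post.

```python
# Sketch of the "many queries, one task" failure model the series discusses.
queries = []
for topic in ["orders", "payments", "shipments"]:  # imagine eight of these
    q = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
        .option("subscribe", topic)
        .load()
        .writeStream.format("delta")
        .option("checkpointLocation", f"/chk/{topic}")  # assumed path
        .toTable(f"bronze.{topic}")  # assumed target tables
    )
    queries.append(q)

# Failure-hiding pattern: block on a single query. If any *other* query
# dies, the task keeps showing as running and the failure stays invisible.
queries[0].awaitTermination()

# Safer: awaitAnyTermination() re-raises the failed query's exception,
# so one dead query fails the whole task loudly.
# spark.streams.awaitAnyTermination()
```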
Latest Reply
Part 1: Streaming Failure Models: Why "It Didn't Crash" Is the Worst Outcome
Part 3: One Cluster per Task — Proven, Ready, and Waiting