Hi everyone, I'm facing an issue when running a notebook on a Databricks All-purpose cluster. Some of my cells/pipelines run for a very long time, and I want to automatically cancel/kill them when they exceed a certain time limit.
I tried setting spark.databricks.execution.timeout, but it doesn't seem to have any effect in my case.
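For reference, this is roughly how I set the property from the notebook. Note that `spark.databricks.execution.timeout` is just the property name I came across; I'm not certain it is actually honored on all-purpose clusters, and the value/units below are my own guess:

```python
from pyspark.sql import SparkSession

# In a Databricks notebook, `spark` is predefined; getOrCreate() is only
# needed to make this snippet self-contained.
spark = SparkSession.builder.getOrCreate()

# My attempt: limit execution time (value in seconds is an assumption).
spark.conf.set("spark.databricks.execution.timeout", "600")
```

Even with this set, long-running cells kept going well past the limit.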
What I need is a timeout mechanism that can cancel the currently running notebook cell, not just a Spark job timeout.
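To illustrate the kind of mechanism I'm after, here is a rough pure-Python sketch I experimented with (`run_with_timeout` is my own helper, not a Databricks or Spark API). The problem is that it only abandons the call after the deadline; the worker thread, and any Spark job it launched, keeps running on the cluster, so it doesn't really cancel anything:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def run_with_timeout(fn, timeout_s, *args, **kwargs):
    """Run fn(*args, **kwargs) in a worker thread; raise if it exceeds timeout_s.

    Caveat: on timeout the thread is merely abandoned, not killed, so any
    underlying Spark job continues to run on the cluster.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args, **kwargs)
    try:
        # Raises concurrent.futures.TimeoutError after timeout_s seconds.
        return future.result(timeout=timeout_s)
    finally:
        # Don't block waiting for the (possibly still running) worker.
        pool.shutdown(wait=False)
```

So what I really need is something that cancels the running cell (or at least the Spark jobs it spawned), not just a wrapper that gives up waiting.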
If anyone can share guidance or official documentation references, I'd really appreciate it. Thanks in advance!