Hi @Pratibha, To set a timeout limit for a job, you can use the timeout_seconds parameter in your job configuration file. This parameter sets the maximum duration for a task; if the task does not complete within this time, the job's status is set to "Timed Out".
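For example, a minimal task definition in the Jobs API 2.1 JSON format might look like this (the job name, task key, and notebook path are placeholders):

```json
{
  "name": "nightly-etl",
  "tasks": [
    {
      "task_key": "main_task",
      "notebook_task": {
        "notebook_path": "/Workspace/etl/main"
      },
      "timeout_seconds": 3600
    }
  ]
}
```

Here timeout_seconds: 3600 stops the task after one hour; a value of 0 (the default) means no timeout.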
For retrying a job after a timeout, you can configure a retry policy for the task in your job configuration. This policy determines when and how many times failed or timed-out task runs are retried.
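As a sketch, the relevant task-level fields in the same JSON format are max_retries, min_retry_interval_millis, and retry_on_timeout (the values below are illustrative):

```json
{
  "task_key": "main_task",
  "timeout_seconds": 3600,
  "max_retries": 3,
  "min_retry_interval_millis": 60000,
  "retry_on_timeout": true
}
```

max_retries caps the number of retries (-1 retries indefinitely), min_retry_interval_millis is the minimum wait between the start of the failed run and the subsequent retry, and retry_on_timeout controls whether a timed-out run is retried at all.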
However, if you want the job to retry even after a timeout error, and you want the launched status in Databricks to show "retry by scheduler", you might need to handle this programmatically. You could write a script that checks the status of the job run and, if it's "Timed Out", triggers the job again after a specified interval (min_retry_interval_millis); see the sketch below.
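Here is a minimal sketch of that approach, assuming the Jobs REST API 2.1, a personal access token in the DATABRICKS_TOKEN environment variable, and hypothetical job and run IDs:

```python
import os
import time

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder workspace URL
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

JOB_ID = 123                        # hypothetical job ID
RUN_ID = 456                        # hypothetical run ID to check
MIN_RETRY_INTERVAL_MILLIS = 60000   # wait one minute before retriggering


def run_result_state(run_id: int):
    """Return the run's result_state (SUCCESS, FAILED, TIMEDOUT, ...) or None if still running."""
    resp = requests.get(
        f"{HOST}/api/2.1/jobs/runs/get",
        headers=HEADERS,
        params={"run_id": run_id},
    )
    resp.raise_for_status()
    return resp.json()["state"].get("result_state")


if run_result_state(RUN_ID) == "TIMEDOUT":
    # Wait the chosen interval, then launch a fresh run of the job.
    time.sleep(MIN_RETRY_INTERVAL_MILLIS / 1000)
    resp = requests.post(
        f"{HOST}/api/2.1/jobs/run-now",
        headers=HEADERS,
        json={"job_id": JOB_ID},
    )
    resp.raise_for_status()
    print("Retriggered as run:", resp.json()["run_id"])
```

This checks once; in practice you would run it on a schedule, or loop until the run reaches a terminal state.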
If you're using Kubernetes, you might find the activeDeadlineSeconds parameter useful; it plays a similar role to timeout_seconds, as shown in the sketch below.
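As a rough equivalent, assuming a batch Job manifest with placeholder names:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job                 # placeholder name
spec:
  activeDeadlineSeconds: 3600       # terminate the whole Job after one hour
  backoffLimit: 2                   # retry failed pods up to twice, before the deadline
  template:
    spec:
      containers:
        - name: main
          image: example/image:latest   # placeholder image
      restartPolicy: Never
```

Note that once activeDeadlineSeconds is exceeded, Kubernetes marks the Job failed with reason DeadlineExceeded and creates no further pods, regardless of backoffLimit. If you have further questions or need more specific advice, feel free to ask!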