Hey @ErikApption,
I may be wrong, but I'll give you my opinion.
Each time you execute dbutils.notebook.run(), it launches a new, independent execution on the same cluster. So if you run the cell today and again tomorrow, you get two separate ephemeral runs.
However, the issue is that Databricks does not treat these as persistent jobs; they are temporary executions that are not stored permanently.
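To make this concrete, here is a minimal sketch of a caller notebook (it assumes a Databricks notebook context where dbutils is predefined; the child notebook path and the widget argument are hypothetical names for illustration):

```python
# Each call to dbutils.notebook.run() starts a separate, ephemeral run of the
# child notebook on the current cluster.
result = dbutils.notebook.run(
    "/Shared/child_notebook",    # hypothetical path of the notebook to run
    600,                         # timeout_seconds: fail after 10 minutes
    {"run_date": "2024-01-01"},  # hypothetical widget parameters for the child
)

# "result" holds whatever the child returned via dbutils.notebook.exit(...).
print(result)
```

Running this cell twice produces two independent ephemeral runs, each with its own short-lived run page.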
Databricks automatically deletes old executions
• Even though new executions are created each time you run the notebook, Databricks automatically deletes old executions based on its retention policies.
• If a process tries to access a past execution that has already been deleted, the error "Notebook runs not found due to retention limits" will appear.
Timeout or Expiry Constraints
• dbutils.notebook.run() accepts a timeout_seconds parameter, but if the Databricks service is down for more than 10 minutes, the run fails regardless of the timeout setting.
• If no timeout is set (timeout_seconds=0), the notebook must still complete within 30 days; otherwise its run is removed. See the sketch after this list.
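As a rough sketch (the child notebook path is hypothetical), this is how you might handle those failure modes at the call site, instead of trying to read old run metadata later:

```python
# A minimal sketch, assuming a hypothetical long-running child notebook.
try:
    # timeout_seconds=0 means "no timeout", but the run must still finish
    # within the 30-day limit mentioned above.
    result = dbutils.notebook.run("/Shared/long_running_child", 0)
except Exception as e:
    # A timeout, or a service outage longer than ~10 minutes, surfaces here
    # as a failed run; handle it now rather than querying the run afterwards.
    print(f"Child notebook run failed: {e}")
    raise
```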
Job Execution History Not Tracked
• Unlike standard Databricks Jobs, which persist in the Jobs UI, notebook runs triggered via dbutils.notebook.run() do not retain long-term execution logs.
• Once that metadata is deleted, Databricks can no longer retrieve details about past executions, which is what produces this error. A workaround is sketched below.
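A common workaround, sketched here under the assumption that you control the caller notebook: persist whatever you need from each run yourself at execution time, so nothing depends on run metadata that Databricks will eventually purge. The notebook path and the table name are hypothetical:

```python
import datetime

# A minimal sketch: capture the child's return value immediately and append
# it to a Delta table (assumes the "audit" schema already exists).
run_started = datetime.datetime.utcnow().isoformat()
result = dbutils.notebook.run("/Shared/child_notebook", 3600)

spark.createDataFrame(
    [(run_started, result)],
    schema="run_started_utc string, result string",
).write.mode("append").saveAsTable("audit.notebook_run_log")
```

The logged rows survive even after the ephemeral run itself has been deleted under the retention policy.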
Hope this gives you an idea 🙂
Isi