When a Databricks job is configured to run on a job cluster in continuous mode, the cluster is kept alive between runs and reused for subsequent runs.
The cluster is not terminated and recreated after each run, since that would defeat the purpose of continuous mode, which is designed to reduce job startup time and improve cluster utilization.
Instead, Databricks keeps the cluster alive and assigns subsequent job runs to the same cluster, avoiding the cost and delay of launching a new cluster each time. Startup time can still vary slightly between runs due to factors like node availability, but the delay should be under 60 seconds in most cases.
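For reference, here is a minimal sketch of what such a configuration looks like when created through the Jobs API 2.1. The workspace URL, token, notebook path, runtime version, and node type are placeholders you would replace with your own values, so treat this as an illustration rather than a drop-in script:

```python
import os
import requests

# Hypothetical placeholders: point these at your own workspace.
HOST = os.environ["DATABRICKS_HOST"]    # e.g. "https://<workspace>.cloud.databricks.com"
TOKEN = os.environ["DATABRICKS_TOKEN"]  # a personal access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Create a continuous job that runs a single notebook task on a shared job cluster.
# The "continuous" block tells Databricks to start a new run as soon as the
# previous one finishes, reusing the job cluster between runs.
job_spec = {
    "name": "continuous-demo-job",
    "continuous": {"pause_status": "UNPAUSED"},
    "job_clusters": [
        {
            "job_cluster_key": "shared_cluster",
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",  # example runtime version
                "node_type_id": "i3.xlarge",          # example node type
                "num_workers": 1,
            },
        }
    ],
    "tasks": [
        {
            "task_key": "main",
            "job_cluster_key": "shared_cluster",
            "notebook_task": {"notebook_path": "/Workspace/Users/me/do_nothing"},
        }
    ],
}

resp = requests.post(f"{HOST}/api/2.1/jobs/create", headers=HEADERS, json=job_spec)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])
```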
In your specific case, if a simple do-nothing notebook takes around 2 minutes to complete and it is unclear whether the same cluster is being reused each time, other factors may be contributing to the delay, such as the cluster configuration, node availability, or resource contention from other jobs running on the cluster. One way to check is to compare the cluster ID and duration breakdown reported for each run, as shown in the sketch below.
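The sketch below lists the job's recent runs through the Jobs API 2.1 and prints the cluster ID plus the setup and execution durations per task. Field names like `cluster_instance`, `setup_duration`, and `execution_duration` come from the run details response; verify the exact layout against your API version, and replace the hypothetical `JOB_ID` with your own:

```python
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]
TOKEN = os.environ["DATABRICKS_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
JOB_ID = 123456789  # hypothetical: replace with your job's ID

# List the most recent runs of the job.
runs = requests.get(
    f"{HOST}/api/2.1/jobs/runs/list",
    headers=HEADERS,
    params={"job_id": JOB_ID, "limit": 10},
).json().get("runs", [])

for run in runs:
    # Fetch full details for each run; the task-level entries carry the cluster
    # instance and the setup/execution duration breakdown (in milliseconds).
    detail = requests.get(
        f"{HOST}/api/2.1/jobs/runs/get",
        headers=HEADERS,
        params={"run_id": run["run_id"]},
    ).json()
    for task in detail.get("tasks", []):
        cluster_id = task.get("cluster_instance", {}).get("cluster_id", "n/a")
        print(
            f"run {run['run_id']}  cluster {cluster_id}  "
            f"setup {task.get('setup_duration', 0)} ms  "
            f"exec {task.get('execution_duration', 0)} ms"
        )
```

If the cluster ID changes on every run, a new cluster is being launched each time; if the setup duration dominates, the ~2 minutes is startup overhead rather than notebook execution.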
I would recommend reviewing the Databricks job run logs and cluster utilization metrics to get a better picture of the job's performance and resource usage over time. If the issue persists, consider reaching out to Databricks support for further assistance.
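If the run details point at the cluster itself, the cluster event log is a useful next stop: it records starts, restarts, resizes, and terminations, any of which would explain repeated startup delays. A hedged sketch using the Clusters API events endpoint, again with placeholder host, token, and cluster ID:

```python
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]
TOKEN = os.environ["DATABRICKS_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
CLUSTER_ID = "0101-123456-abcdefgh"  # hypothetical: the cluster ID seen in the run details

# Pull recent lifecycle events for the cluster (starting, running, resizing,
# terminating, etc.) to see whether it is being restarted between runs.
resp = requests.post(
    f"{HOST}/api/2.0/clusters/events",
    headers=HEADERS,
    json={"cluster_id": CLUSTER_ID, "limit": 50},
)
resp.raise_for_status()

for event in resp.json().get("events", []):
    print(event["timestamp"], event["type"])
```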