Thanks for the response. Yes, we are currently doing this (using an interactive cluster); however, the following points are driving us to re-evaluate this approach and look for a possible alternative:
1) Cost: job clusters are billed at a lower rate than interactive (all-purpose) clusters.
2) In the production environment, we intermittently receive the following error:
run failed with error message Context ExecutionContextId(1496834584910869936) is disconnected.
While this error can occur for multiple reasons, our understanding is that cluster resource contention is one of the main causes. The idea, therefore, is to give each job its own job cluster that can be scaled independently, so every job gets dedicated resources instead of sharing a single interactive cluster across all jobs. Creating many interactive clusters would not be feasible given the cost, but using job clusters can offset some of that cost and reduce overall spend (see the sketch after this list).
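For illustration, here is a minimal sketch of what a per-job cluster definition could look like using the Jobs 2.1 REST API. The workspace URL, token, notebook path, Spark version, node type, and autoscale bounds are all placeholder assumptions to adapt to your environment.

```python
import requests

# Placeholders (assumptions): substitute your workspace URL, a valid token,
# and the real notebook path before running.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

# Job definition that uses its own job cluster ("new_cluster") instead of an
# existing interactive cluster ("existing_cluster_id"). The cluster is created
# when the run starts and terminated when it finishes, so each job gets
# dedicated, independently sized resources.
job_spec = {
    "name": "example-dedicated-cluster-job",  # hypothetical job name
    "tasks": [
        {
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Repos/team/project/etl_notebook"},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",  # example runtime
                "node_type_id": "i3.xlarge",          # example node type
                # Autoscaling lets this job's cluster grow under load
                # instead of contending for a shared interactive cluster.
                "autoscale": {"min_workers": 2, "max_workers": 8},
            },
        }
    ],
}

# Create the job via the Jobs 2.1 API.
resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])
```

With one such spec per job, each workload can be sized and scaled on its own, which is the isolation we are hoping will avoid the disconnected-context errors.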
Also, the official documentation (https://docs.databricks.com/workflows/jobs/schedule-jobs.html) says nothing explicit about cluster re-use or termination; it only mentions a slight delay of less than 60 seconds between scheduled runs. If the job cluster has to be re-created for each run, I don't think a delay of only 60 seconds can be guaranteed.