Hi @KrzysztofPrzyso, thanks for sharing your concern here.
The Shared Jobs Cluster feature in Databricks is scoped to a single job run: it lets multiple tasks within the same run reuse one cluster to optimize resource usage. It is not intended to be shared across different jobs, or across separate runs of the same job. As such, you cannot use the Shared Jobs Cluster feature from an external orchestrator such as Azure Data Factory (ADF) or Synapse to reduce startup time, lower compute cost, or cache data across different job runs.
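To make the scope concrete, here is a minimal sketch of a Databricks Jobs API job definition in which two tasks share one job cluster within a single run. The job name, task keys, notebook paths, node type, and Spark version below are illustrative placeholders, not values from your workspace:

```json
{
  "name": "example-job",
  "job_clusters": [
    {
      "job_cluster_key": "shared_cluster",
      "new_cluster": {
        "spark_version": "13.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 2
      }
    }
  ],
  "tasks": [
    {
      "task_key": "ingest",
      "job_cluster_key": "shared_cluster",
      "notebook_task": { "notebook_path": "/Jobs/ingest" }
    },
    {
      "task_key": "transform",
      "depends_on": [ { "task_key": "ingest" } ],
      "job_cluster_key": "shared_cluster",
      "notebook_task": { "notebook_path": "/Jobs/transform" }
    }
  ]
}
```

Both tasks reference the same `job_cluster_key`, so they reuse one cluster, but that cluster exists only for the lifetime of this run; a separate run, or an ADF-triggered notebook activity, cannot attach to it.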
However, if you want to save startup time, reduce compute cost for the underlying VMs, and possibly reuse or cache some data when orchestrating from Azure Data Factory, you can select an existing interactive cluster or an existing instance pool when creating the Databricks linked service. With that configuration, subsequent tasks or jobs in a sequence of runs will reuse the same cluster (or draw warm VMs from the pool) instead of provisioning a new cluster each time.
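As a sketch, an ADF Databricks linked service pinned to an existing interactive cluster looks like the following. The workspace domain, cluster ID, Key Vault linked service name, and secret name are placeholders you would replace with your own values:

```json
{
  "name": "AzureDatabricksLinkedService",
  "properties": {
    "type": "AzureDatabricks",
    "typeProperties": {
      "domain": "https://<region>.azuredatabricks.net",
      "existingClusterId": "<interactive-cluster-id>",
      "accessToken": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "<key-vault-linked-service>",
          "type": "LinkedServiceReference"
        },
        "secretName": "databricks-token"
      }
    }
  }
}
```

To use an instance pool instead, replace `existingClusterId` with `instancePoolId` and supply the new-cluster settings (such as `newClusterVersion` and `newClusterNumOfWorker`) so ADF can create job clusters from the pool's pre-warmed VMs.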
https://learn.microsoft.com/en-us/azure/data-factory/solution-template-databricks-notebook
Please give this a like if it is helpful. Follow-ups are appreciated.
Kudos,
Sai Kumar