10-26-2023 11:00 AM
If I have a job like this that orchestrates N DLT pipelines, what setting do I need to adjust so that they use the same compute resources between steps rather than spinning up and shutting down for each individual pipeline?
11-07-2023 08:42 AM
Edit: I'm using Databricks Asset Bundles to deploy both the pipelines and the jobs.
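For reference, the setup described above typically looks something like the following asset bundle sketch, where each DLT pipeline is wired into the job as its own pipeline_task and therefore starts and stops its own compute. The resource names, task keys, and pipeline references here are placeholders, not the poster's actual configuration.

```yaml
# Hypothetical sketch of the current setup: one job with a pipeline_task per
# DLT pipeline. Each pipeline_task provisions (and later tears down) its own
# DLT cluster; job cluster settings do not apply to pipeline tasks.
resources:
  jobs:
    orchestrate_dlt:
      name: orchestrate-dlt-pipelines
      tasks:
        - task_key: bronze
          pipeline_task:
            pipeline_id: ${resources.pipelines.bronze_pipeline.id}
        - task_key: silver
          depends_on:
            - task_key: bronze
          pipeline_task:
            pipeline_id: ${resources.pipelines.silver_pipeline.id}
```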
11-07-2023 11:08 AM
@bradleyjamrozik - under the DLT pipeline settings, the notebooks can all be listed together in a single pipeline. That pipeline will then deploy a single compute resource for all of the tasks.
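To illustrate the suggestion, a single DLT pipeline can list several notebooks under libraries so that one cluster serves all of them. This is only a sketch with placeholder paths, catalog/schema names, and cluster sizing.

```yaml
# Hypothetical sketch of the suggested consolidation: one DLT pipeline whose
# `libraries` section lists every notebook, so a single cluster processes all
# of the tables. Paths, catalog, target schema, and autoscale values are
# placeholders.
resources:
  pipelines:
    combined_pipeline:
      name: combined-dlt-pipeline
      catalog: main
      target: my_schema
      libraries:
        - notebook:
            path: ../src/bronze_tables.py
        - notebook:
            path: ../src/silver_tables.py
      clusters:
        - label: default
          autoscale:
            min_workers: 1
            max_workers: 4
```

The trade-off is that all datasets then publish to the single target catalog and schema defined on the pipeline, which is exactly the constraint raised in the next reply.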
11-07-2023 11:10 AM
The different pipelines point to different catalogs/schemas, so they have to be kept separate.
11-13-2023 10:17 AM
@bradleyjamrozik - I checked internally as well. Separate DLT pipelines cannot share compute resources; each pipeline provisions its own.
07-09-2025 04:03 AM
@shan_chandra Hello, I have the same issue: I have a job that uses serverless compute, and within this job I run several different tasks and then start a DLT pipeline, which also uses serverless compute. This means the job again has to wait for a resource to start.
Has anything changed since 2023, and if not, is this something that will be possible in the future? For my use case it is really limiting, since the overall job now spends about 10 minutes just waiting for resources to start, which isn't optimal when the job has to run every hour and provide results quickly.