How are parallel and subsequent jobs handled by cluster?
01-18-2023 03:59 AM
Hello,
Apologies for the dumb question, but I'm new to Databricks and need clarification on the following.
Can parallel and subsequent jobs reuse the same compute resources to keep time and cost overhead as low as possible, or do they spin up a new cluster every time?
Regards,
Tanja
1 REPLY
01-18-2023 04:55 AM
@tanja.savic
You can use a shared job cluster:
https://docs.databricks.com/workflows/jobs/jobs.html#use-shared-job-clusters
But remember that a shared job cluster is scoped to a single job run and cannot be used by other jobs or by other runs of the same job. In other words, tasks within one job run can share a cluster, but each job run still gets its own cluster (1 job run = 1 cluster).
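For illustration, here is a minimal sketch (not from the original thread) of how tasks in one job can share a single job cluster, assuming the Databricks SDK for Python (`pip install databricks-sdk`). The job name, notebook paths, and node type are hypothetical placeholders; adjust them for your workspace.

```python
# Sketch: two tasks in the same job reference one job_cluster_key,
# so they run on a single shared job cluster instead of each
# spinning up its own compute.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs, compute

w = WorkspaceClient()  # reads credentials from env vars or ~/.databrickscfg

shared_cluster = jobs.JobCluster(
    job_cluster_key="shared_job_cluster",
    new_cluster=compute.ClusterSpec(
        spark_version="11.3.x-scala2.12",
        node_type_id="i3.xlarge",  # illustrative node type; pick one for your cloud
        num_workers=2,
    ),
)

job = w.jobs.create(
    name="example-multi-task-job",          # hypothetical job name
    job_clusters=[shared_cluster],
    tasks=[
        jobs.Task(
            task_key="ingest",
            job_cluster_key="shared_job_cluster",  # uses the shared cluster
            notebook_task=jobs.NotebookTask(notebook_path="/Repos/demo/ingest"),
        ),
        jobs.Task(
            task_key="transform",
            depends_on=[jobs.TaskDependency(task_key="ingest")],
            job_cluster_key="shared_job_cluster",  # same cluster, no new spin-up
            notebook_task=jobs.NotebookTask(notebook_path="/Repos/demo/transform"),
        ),
    ],
)
print(f"Created job {job.job_id}")
```

Within a single run of this job, both tasks reuse the same cluster; a separate run of the job (or a different job) would still create its own cluster.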

