Parallel jobs with individual contexts
02-12-2024 09:08 AM
I was wondering if someone could help us with an implementation question. Our current program spins up five jobs through the Databricks API on the same Databricks cluster, but each job needs its own Spark context (specifically, each one connects to a different AWS region). The jobs run in parallel, and some of them fail because they cannot find their bucket. I'm fairly sure what is happening is that they pick up the Spark context already initialized on the driver by another job instead of the Spark context we configured for that specific job. Rerunning the failed job on its own finds the bucket and passes.
Any ideas on what we can do to force each job to use a new Spark context (instead of getOrCreate()), use a different cluster configuration, or something else? Thanks! A minimal sketch of the behavior I suspect is below.
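Here is that sketch in Java, with placeholder endpoint values, assuming both jobs run in the same driver JVM: once one job has created the session, a later getOrCreate() returns that same session, and settings that belong to the underlying SparkContext (such as spark.hadoop.* keys) are not guaranteed to be re-applied.

```java
import org.apache.spark.sql.SparkSession;

public class SharedSessionSketch {
    public static void main(String[] args) {
        // Job A is the first to run on the driver and creates the session.
        // The endpoint value here is a placeholder.
        SparkSession jobA = SparkSession.builder()
                .config("spark.hadoop.fs.s3a.endpoint", "s3.us-east-1.amazonaws.com")
                .getOrCreate();

        // Job B runs later on the same driver and also calls getOrCreate().
        // Because a session already exists in this JVM, it gets that session
        // back; SparkContext-level settings (like spark.hadoop.*) may not be
        // re-applied, so Job B can end up reading S3 through Job A's endpoint.
        SparkSession jobB = SparkSession.builder()
                .config("spark.hadoop.fs.s3a.endpoint", "s3.eu-west-1.amazonaws.com")
                .getOrCreate();

        // Both variables point at the same session and the same SparkContext.
        System.out.println(jobA == jobB);
        System.out.println(jobA.sparkContext() == jobB.sparkContext());
    }
}
```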
02-12-2024 04:58 PM
You can configure individual buckets with different credentials, endpoints, and so on using per-bucket configuration:
https://docs.databricks.com/en/connect/storage/amazon-s3.html#per-bucket-configuration
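Something like the sketch below shows the idea (Java, with placeholder bucket names and endpoints): per-bucket S3A keys are scoped to the bucket name, so a single shared context can read buckets in different regions. The same keys can also be set once in the cluster's Spark config with a `spark.hadoop.` prefix, as in the linked doc.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.spark.sql.SparkSession;

public class PerBucketConfig {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().getOrCreate();
        Configuration hadoopConf = spark.sparkContext().hadoopConfiguration();

        // Per-bucket S3A settings: each key is scoped to one bucket, so the
        // same shared SparkContext can talk to buckets in different regions.
        // Bucket names and endpoints below are placeholders.
        hadoopConf.set("fs.s3a.bucket.us-east-bucket.endpoint", "s3.us-east-1.amazonaws.com");
        hadoopConf.set("fs.s3a.bucket.eu-west-bucket.endpoint", "s3.eu-west-1.amazonaws.com");

        // Reads now resolve the endpoint from the bucket name in the path,
        // regardless of which job first created the session.
        spark.read().text("s3a://us-east-bucket/path/to/data").show();
        spark.read().text("s3a://eu-west-bucket/path/to/data").show();
    }
}
```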
02-13-2024 09:25 AM
Thanks for the response! Before hitting the Databricks API, we currently initialize the Spark session with the AWS region the bucket is in, and then we read from it using sparkSession.read().text(s3Path), where the S3 path is a dynamic variable pointing at the exact directory within the bucket we want to read. Regarding the article you provided: we access the buckets with an instance profile, which has permission to read buckets in all regions. Wouldn't that theoretically be enough, since the read succeeds most of the time (and when a job fails, rerunning it independently does succeed)? It seems there is just a conflict between Spark contexts when multiple jobs run simultaneously on the same cluster. To make sure I understand the suggestion, I've put a sketch of what the per-bucket approach might look like in our read path below.
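In this sketch the bucket name, region, and path are placeholders; the endpoint is pinned per bucket instead of per session, and the instance profile still supplies the credentials.

```java
import java.net.URI;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PerBucketRead {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder().getOrCreate();

        // Placeholders: in our setup the path is built dynamically per job.
        String s3Path = "s3a://example-bucket/some/prefix/";
        String bucketRegion = "us-west-2";

        // Derive the bucket name from the path and pin its endpoint with a
        // per-bucket key, so the setting follows the bucket rather than
        // whichever job happened to initialize the shared context.
        String bucket = new URI(s3Path).getHost();
        spark.sparkContext().hadoopConfiguration().set(
                "fs.s3a.bucket." + bucket + ".endpoint",
                "s3." + bucketRegion + ".amazonaws.com");

        Dataset<Row> df = spark.read().text(s3Path);
        df.show();
    }
}
```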

