
Parallel jobs with individual contexts

cesarc
New Contributor II

I was wondering if someone could help us with an implementation question. Our current program spins up 5 jobs through the Databricks API on the same Databricks cluster, but each job needs its own Spark context (specifically, each one connects to a different AWS region). The jobs run in parallel, but some of them fail because they cannot find their bucket. I'm fairly sure they're pulling the SparkContext that another job initialized on the driver instead of the Spark context we configured for that specific job. Rerunning a failed job on its own, it finds the bucket and passes.

Any ideas on how we can force a job to use a new Spark context (instead of getOrCreate()), a different cluster configuration, or something else? Thanks!
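The failure mode described above can be sketched in plain Python, no Spark needed. This toy builder is a simplification and an assumption about what is happening, not confirmed from the thread: in real Spark, getOrCreate() does apply session-level SQL conf to an existing session, but SparkContext-level settings (such as spark.hadoop.* / fs.s3a.* endpoints) are ignored once a context already exists on the driver, which matches the symptom of one job reading through another job's region:

```python
class SessionBuilder:
    """Toy stand-in for SparkSession.builder, illustrating getOrCreate semantics."""
    _active = None  # one shared session per driver

    def __init__(self):
        self._conf = {}

    def config(self, key, value):
        self._conf[key] = value
        return self

    def get_or_create(self):
        # If a session already exists, return it and DROP this builder's
        # context-level conf. This is why a job can silently end up reading
        # through the endpoint another job configured first.
        if SessionBuilder._active is None:
            SessionBuilder._active = dict(self._conf)
        return SessionBuilder._active


# Two "jobs" on the same driver, each asking for its own region endpoint:
job_a = (SessionBuilder()
         .config("fs.s3a.endpoint", "s3.us-east-1.amazonaws.com")
         .get_or_create())
job_b = (SessionBuilder()
         .config("fs.s3a.endpoint", "s3.eu-west-1.amazonaws.com")
         .get_or_create())
# job_b gets the us-east-1 session; its eu-west-1 setting never took effect.
```

Whichever job's driver initializes the context first wins, so per-job configuration of a shared cluster is inherently racy.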


feiyun0112
New Contributor III

 

You can configure each bucket with its own credentials, endpoint, and so on:

https://docs.databricks.com/en/connect/storage/amazon-s3.html#per-bucket-configuration 
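The idea in the linked page is that Hadoop's S3A connector supports per-bucket overrides via `fs.s3a.bucket.<bucket>.*` keys, so region routing becomes a property of the bucket rather than of whichever job initialized the context. A minimal sketch of building such a config (the `per_bucket_s3_conf` helper and the bucket names are hypothetical; the key-name scheme follows Hadoop's per-bucket convention, and `fs.s3a.endpoint.region` may require a recent Hadoop/DBR version):

```python
def per_bucket_s3_conf(bucket: str, region: str) -> dict:
    """Build per-bucket S3A override keys for one bucket/region pair."""
    prefix = f"fs.s3a.bucket.{bucket}"
    return {
        f"{prefix}.endpoint": f"s3.{region}.amazonaws.com",
        f"{prefix}.endpoint.region": region,
    }


# Set once in the shared cluster's Spark config; every job then reads each
# bucket through its own region, regardless of which job started first.
conf = {}
conf.update(per_bucket_s3_conf("logs-us-east-1", "us-east-1"))
conf.update(per_bucket_s3_conf("logs-eu-west-1", "eu-west-1"))
```

Because the overrides are keyed by bucket name, they are safe to apply cluster-wide and sidestep the per-job SparkContext race entirely.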

cesarc
New Contributor II

Thanks for the response! Before hitting the Databricks API, we currently initialize the Spark session with the AWS region the bucket is in, and then read from it with sparkSession.read().text(s3Path), where the S3 path is a dynamic variable pointing at the exact directory within the bucket we want to read. Regarding the article you linked: we use an instance profile to access the buckets, and it has permission to read buckets in all regions. Wouldn't that theoretically be enough, since it reads from the correct bucket most of the time? (And when a job fails, rerunning it independently does succeed.) It seems there's just a conflict between SparkContexts when multiple jobs run simultaneously on the same cluster.
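One way to reconcile the two approaches (a sketch, not confirmed from the thread): since the instance profile already covers all regions, credentials need no per-job setup; only the endpoint differs, and the bucket name inside the dynamic s3Path determines which per-bucket override applies. So the region can be derived from the path itself instead of baked into the session at startup. The `bucket_of` helper below is hypothetical:

```python
from urllib.parse import urlparse


def bucket_of(s3_path: str) -> str:
    """Extract the bucket name from an s3:// or s3a:// path."""
    return urlparse(s3_path).netloc


# With per-bucket endpoint overrides set at the cluster level, the bucket
# name alone tells you which region a given read will be routed through.
bucket = bucket_of("s3a://logs-eu-west-1/2024/01/")
```

This removes any dependence on which job initialized the shared SparkContext.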

Kaniz
Community Manager

Hey there! Thanks a bunch for being part of our awesome community! 🎉 

We love having you around and appreciate all your questions. Take a moment to check out the responses – you'll find some great info. Your input is valuable, so pick the best solution for you. And remember, if you ever need more help, we're here for you!

Keep being awesome! 😊🚀

 
