Parallel jobs with individual contexts

cesarc
New Contributor II

I was wondering if someone could help us with an implementation question. Our current program spins up 5 jobs through the Databricks API on the same Databricks cluster, but each job needs its own Spark context (specifically, each one connects to an S3 bucket in a different AWS region). The jobs run in parallel, but some of them fail because they cannot find their bucket. I'm fairly sure what is happening is that they pull the SparkContext the driver already initialized for another job instead of the one we configured for that specific job. Rerunning a failed job on its own finds the bucket and passes.

Any ideas on what we can do to force each job to use a new Spark context (instead of getOrCreate()), a different cluster configuration, etc.? Thanks!
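To illustrate what I think is happening (a minimal sketch; the bucket name and endpoint are placeholders, and it assumes each job sets its region on the shared Hadoop configuration the way ours do):

import org.apache.spark.sql.SparkSession;

public class RegionJob {
    public static void main(String[] args) {
        // On a shared cluster, getOrCreate() hands back the existing
        // session and its single SparkContext rather than a fresh one.
        SparkSession spark = SparkSession.builder().getOrCreate();

        // This mutates the ONE Hadoop configuration shared by every
        // job on the cluster; it is not scoped to this job.
        spark.sparkContext().hadoopConfiguration()
            .set("fs.s3a.endpoint", "s3.us-east-1.amazonaws.com");

        // If a parallel job resets fs.s3a.endpoint to another region
        // before this read runs, the bucket lookup can fail, which
        // would explain why a lone rerun succeeds.
        spark.read().text("s3://us-east-bucket/some/prefix/").show();
    }
}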

2 REPLIES

feiyun0112
Honored Contributor

 

You can configure buckets individually, with different credentials, endpoints, and so on:

https://docs.databricks.com/en/connect/storage/amazon-s3.html#per-bucket-configuration 
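For example, in the cluster's Spark config (a sketch based on that page; the bucket names and regions are placeholders to replace with your own):

spark.hadoop.fs.s3a.bucket.us-east-bucket.endpoint s3.us-east-1.amazonaws.com
spark.hadoop.fs.s3a.bucket.eu-west-bucket.endpoint s3.eu-west-1.amazonaws.com

Because each key is scoped to a single bucket, every job picks up the settings for the bucket it reads, and parallel jobs never overwrite each other, so no job needs its own SparkContext.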

cesarc
New Contributor II

Thanks for the response! Before hitting the Databricks API, we currently initialize the Spark session with the AWS region the bucket is in, and then we read from it with sparkSession.read().text(s3Path), where the S3 path is a dynamic variable pointing at the exact directory within the bucket we want to read from. Looking at the article you provided: we use an instance profile to access the buckets, and it has permission to read buckets in all regions. Wouldn't that theoretically be enough, since the read works most of the time? (And when it fails, rerunning it independently succeeds.) It seems there's just a conflict between SparkContexts when multiple jobs run simultaneously on the same cluster.
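Something we might try based on your link: keep the instance profile for credentials and pin only the per-bucket endpoint before the read, since per-bucket keys are scoped by bucket name and shouldn't collide across parallel jobs. A rough sketch (assuming the path goes through the S3A connector; the bucket and region are placeholders):

import org.apache.spark.sql.SparkSession;

public class PerBucketRegionJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().getOrCreate();

        // Scoped to one bucket, so a parallel job configuring a
        // different bucket cannot overwrite this setting. Credentials
        // still come from the cluster's instance profile.
        spark.sparkContext().hadoopConfiguration().set(
            "fs.s3a.bucket.eu-west-bucket.endpoint",
            "s3.eu-west-1.amazonaws.com");

        String s3Path = "s3a://eu-west-bucket/some/prefix/";
        spark.read().text(s3Path).show();
    }
}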
