I was wondering if someone could help us with an implementation question. Our current program spins up 5 jobs through the Databricks API on the same Databricks cluster, but each job needs its own Spark context (specifically, each one connects to a different AWS region). The jobs run in parallel, and some of them fail because they cannot find their bucket. I'm fairly sure what's happening is that they pick up the SparkContext already initialized on the driver by another job instead of the Spark context we configured for that specific job. When we rerun a failed job, it finds the bucket and passes.
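To make the suspected failure mode concrete, here's a minimal PySpark sketch (the bucket and region names are placeholders, not our actual code): if I understand getOrCreate() correctly, per-job spark.hadoop.* settings only land in the Hadoop configuration when the SparkContext is first created, so a job that arrives second just inherits whatever context the first job built.

```python
# Sketch of the suspected pattern (placeholder names, not our real code).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # Intended per-job region; spark.hadoop.* only takes effect at context creation time.
    .config("spark.hadoop.fs.s3a.endpoint", "s3.us-west-2.amazonaws.com")
    .getOrCreate()  # returns the session/context another job already created on the shared driver
)

# Fails to find the bucket when the active context was configured by a job
# pointed at a different region; passes on rerun once "our" config wins.
df = spark.read.json("s3a://example-us-west-2-bucket/input/")
```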
Any ideas on what we can do here, whether that's forcing each job to use a new Spark context (instead of getOrCreate()), a different cluster configuration, or something else? Thanks!