On a regular cluster, you can use:
```
# set a Hadoop configuration value on the driver's SparkContext
spark.sparkContext._jsc.hadoopConfiguration().set(key, value)
```
These values are then available on the executors via the Hadoop configuration.
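For context, the full pattern looks roughly like this; the ADLS account key and path below are placeholders for whatever credential and storage the job actually uses:

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# driver side: put the credential into the shared Hadoop configuration
spark.sparkContext._jsc.hadoopConfiguration().set(
    "fs.azure.account.key.myaccount.dfs.core.windows.net",  # placeholder key
    "token-generated-at-runtime",                           # placeholder value
)

# executor side: tasks pick the value up whenever they go through a
# Hadoop FileSystem, e.g. when reading from that storage account
df = spark.read.text("abfss://container@myaccount.dfs.core.windows.net/data")
```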
However, on a high concurrency cluster, attempting to do so results in:
> py4j.security.Py4JSecurityException: Method public org.apache.hadoop.conf.Configuration org.apache.spark.api.java.JavaSparkContext.hadoopConfiguration() is not whitelisted on class class org.apache.spark.api.java.JavaSparkContext
Is there a way around this, or is this a limitation of the high concurrency cluster type?
The goal here is to pass tokens that are generated at runtime to the executors, which means that setting them in the cluster settings (i.e. cluster > advanced > spark > spark config) is not an option.
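Concretely, the flow I'm after looks like the sketch below; `fetch_token()` is a hypothetical stand-in for whatever issues the credential at job start:

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def fetch_token() -> str:
    # hypothetical: token minted when the job starts (e.g. from an OAuth
    # endpoint), so it cannot be baked into the static cluster Spark config
    return "..."

# this is the call that the high concurrency cluster rejects
spark.sparkContext._jsc.hadoopConfiguration().set(
    "fs.example.token",  # placeholder config key
    fetch_token(),
)
```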