- 10130 Views
- 4 replies
- 1 kudos
On a regular cluster, you can use `spark.sparkContext._jsc.hadoopConfiguration().set(key, value)`. These values are then available on the executors through the Hadoop configuration. However, on a high concurrency cluster, attempting to do so results ...
Latest Reply
I am not sure why you are getting that error on a high concurrency cluster, as I am able to set the configuration as you show above. Can you try the following code instead? `sc._jsc.hadoopConfiguration().set(key, value)`
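For reference, a minimal sketch of the pattern being discussed, assuming an existing SparkSession named `spark`; the property name `fs.s3a.access.key` is only an illustrative placeholder for whatever key/value pair you need to set:

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# Set a Hadoop configuration property on the driver.
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", "<access-key>")

# Read it back on the driver to confirm it was applied.
print(sc._jsc.hadoopConfiguration().get("fs.s3a.access.key"))

# Hadoop-backed readers and writers consult this configuration when they
# open files, which is how the value reaches work running on executors, e.g.:
# df = spark.read.parquet("s3a://my-bucket/path")
```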
- 7714 Views
- 5 replies
- 0 kudos
Latest Reply
This is an old post; however, is this still accurate for the latest version of Databricks in 2019? If so, how would one approach the following?
1. Connect to many MongoDBs.
2. Connect to MongoDB when connection string information is dynamic (i.e. stored in s...
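For illustration, a minimal sketch of reading from MongoDB with a dynamically supplied connection string, assuming the MongoDB Spark connector 2.x/3.x (which registers the `mongo` data source and the `spark.mongodb.input.uri` option) is attached to the cluster; the secret scope/key and database/collection names are hypothetical placeholders:

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Build the connection string at runtime, e.g. from a Databricks secret.
# `dbutils` is available as a global in Databricks notebooks; the scope and
# key names here are hypothetical.
mongo_uri = dbutils.secrets.get(scope="mongo", key="connection-uri")

df = (
    spark.read.format("mongo")
    .option("spark.mongodb.input.uri", mongo_uri)
    .option("database", "my_database")
    .option("collection", "my_collection")
    .load()
)

# Repeating the read with a different URI lets the same job talk to
# several MongoDB deployments.
```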