10-09-2015 09:38 AM
It looks like the following property is set quite high, which reserves a large share of executor memory for caching when you cache the dataset:

spark.storage.memoryFraction=0.9
Lowering this value should likely solve the problem. Take a look at the upstream tuning docs: https://spark.apache.org/docs/latest/tuning.html
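
For reference, here is a minimal sketch (Spark 1.x Scala API; the app name is a placeholder) of how you might dial the fraction back toward the 0.6 default when building the SparkConf:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// spark.storage.memoryFraction controls the share of executor heap
// reserved for cached RDDs; the Spark 1.x default is 0.6. At 0.9,
// execution (shuffle buffers, task working memory) is starved.
val conf = new SparkConf()
  .setAppName("MyApp") // placeholder name, not from the original post
  .set("spark.storage.memoryFraction", "0.6")

val sc = new SparkContext(conf)
```

The same override can also be passed at submit time with `--conf spark.storage.memoryFraction=0.6`, which avoids a code change.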