Should I always cache my RDDs and DataFrames?
02-24-2015 03:40 PM
You should definitely cache() RDDs and DataFrames in the following cases:
- You reuse them in an iterative loop (e.g. ML algorithms).
- You reuse the RDD multiple times in a single application, job, or notebook.
- The upfront cost to regenerate the RDD partitions is high (e.g. reading from HDFS, or after a complex chain of map(), filter(), etc.). Caching also helps the recovery process if a Worker node dies (a minimal sketch follows below).
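For illustration, here is a minimal Scala sketch of caching a DataFrame that is reused across several actions. The app name, input path, and column name are hypothetical; the point is that without cache() each action would re-read and re-filter the source data.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch; the app name, input path, and column name are hypothetical.
val spark = SparkSession.builder.appName("cache-reuse-example").getOrCreate()

val ratings = spark.read.parquet("/data/ratings")
  .filter("rating IS NOT NULL")
  .cache()                       // materialized on the first action, reused afterwards

// Reused across several actions/iterations: without cache(), every action below
// would re-read and re-filter the source data.
println(ratings.count())
for (_ <- 1 to 10) {
  ratings.sample(false, 0.1).agg(Map("rating" -> "avg")).show()
}

ratings.unpersist()              // release the cached partitions when done
```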
Keep in mind that Spark will automatically evict RDD partitions from Workers in an LRU manner. The LRU eviction happens independently on each Worker and depends on the available memory in the Worker.
During the lifecycle of an RDD, RDD partitions may exist in memory or on disk across the cluster depending on available memory.
The Storage tab on the Spark UI shows where partitions exist (memory or disk) across the cluster at any given point in time.
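Besides the Storage tab, the current storage level can also be checked from code. A small Scala sketch, with hypothetical path and view names:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch; the path and view name are hypothetical.
val spark = SparkSession.builder.appName("storage-inspection").getOrCreate()
val df = spark.read.parquet("/data/events")

df.cache()
println(df.storageLevel)        // e.g. StorageLevel(disk, memory, deserialized, 1 replicas)

val rdd = df.rdd.cache()
println(rdd.getStorageLevel)    // e.g. StorageLevel(memory, deserialized, 1 replicas)

// Cached tables/views can be checked through the catalog:
df.createOrReplaceTempView("events")
spark.catalog.cacheTable("events")
println(spark.catalog.isCached("events"))   // true
```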
Note that cache() is an alias for persist(StorageLevel.MEMORY_ONLY), which may not be ideal for datasets larger than available cluster memory. Each RDD partition that is evicted from memory has to be rebuilt from source (e.g. HDFS, network, etc.), which is expensive.
A better option is persist(StorageLevel.MEMORY_AND_DISK), which spills RDD partitions to the Worker's local disk if they are evicted from memory. In that case, rebuilding a partition only requires pulling data from the Worker's local disk, which is relatively fast.
You can also persist the data as serialized byte arrays by using the _SER variants, MEMORY_ONLY_SER and MEMORY_AND_DISK_SER. This saves space but incurs an extra serialization/deserialization penalty. Because the data is stored as serialized byte arrays, fewer Java objects are created and GC pressure is reduced.
You can also replicate the data to another node by appending _2 to the StorageLevel (serialized or not), for example MEMORY_ONLY_SER_2 and MEMORY_AND_DISK_2. This enables fast partition recovery in the case of a node failure, since the data can be rebuilt from a rack-local, neighboring node through the same network switch, for example.
You can see the full list here: https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.storage.StorageLevel$
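To make the storage levels above concrete, here is a minimal Scala sketch. The app name and input path are hypothetical; the StorageLevel constants are the ones documented at the link above.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

// Minimal sketch; the app name and input path are hypothetical.
val spark = SparkSession.builder.appName("persist-levels").getOrCreate()
val df = spark.read.parquet("/data/events")

// Deserialized in memory only; evicted partitions are recomputed from source.
val rdd = df.rdd
rdd.persist(StorageLevel.MEMORY_ONLY)

// A storage level can only be changed after unpersisting.
rdd.unpersist()

// Spill evicted partitions to the Worker's local disk instead of recomputing them.
rdd.persist(StorageLevel.MEMORY_AND_DISK)

// Other variants (shown as comments, since an RDD keeps one level at a time):
// rdd.persist(StorageLevel.MEMORY_ONLY_SER)        // serialized: smaller footprint, ser/deser cost
// rdd.persist(StorageLevel.MEMORY_AND_DISK_SER)    // serialized + spill to disk
// rdd.persist(StorageLevel.MEMORY_AND_DISK_SER_2)  // additionally replicated to a second node
```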
05-14-2019 06:26 AM
01-18-2020 07:21 AM
Hello Mefryar,
I still see that cache() is an alias of persist(StorageLevel.MEMORY_ONLY). Doc links attached.
Official doc
Official Pyspark doc
01-21-2020 09:31 AM
Hi, @Sivagangireddy Singam. I see that the RDD programming guide does say that the default storage level is MEMORY_ONLY, but the latest PySpark docs (2.4.4) state "The default storage level has changed to MEMORY_AND_DISK." (The PySpark docs you linked to were 2.1.2.)
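For what it's worth, the difference between the RDD default and the Dataset/DataFrame default can also be checked directly. A small illustrative Scala sketch (names are arbitrary):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val spark = SparkSession.builder.appName("default-levels").getOrCreate()
import spark.implicits._

// RDD cache() still defaults to MEMORY_ONLY ...
val rdd = spark.sparkContext.parallelize(1 to 100).cache()
println(rdd.getStorageLevel == StorageLevel.MEMORY_ONLY)      // true

// ... while Dataset/DataFrame cache() defaults to MEMORY_AND_DISK since Spark 2.0.
val ds = (1 to 100).toDS().cache()
println(ds.storageLevel == StorageLevel.MEMORY_AND_DISK)      // true
```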
07-04-2019 03:31 AM
Thanks for this clarification on the deserialization penalty. I always wanted to know when this penalty is incurred.
04-11-2017 02:24 AM
Hello,
Which is more efficient to cache, an RDD or a DataFrame? (I mean, which is better to cache and consumes less memory?)
Thank you,

