02-24-2015 03:40 PM
You should definitely cache() RDDs and DataFrames when you reuse them multiple times (for example across several actions in the same application, job, or notebook), or when the partitions are expensive to regenerate from source after a complex chain of map(), filter(), etc. transformations. Caching also helps the recovery process if a Worker node dies.

Keep in mind that Spark will automatically evict RDD partitions from Workers in an LRU manner. The LRU eviction happens independently on each Worker and depends on the available memory on that Worker.
During the lifecycle of an RDD, RDD partitions may exist in memory or on disk across the cluster depending on available memory.
The Storage tab on the Spark UI shows where partitions exist (memory or disk) across the cluster at any given point in time.
Note that cache() is an alias for persist(StorageLevel.MEMORY_ONLY), which may not be ideal for datasets larger than available cluster memory. Each RDD partition that is evicted out of memory will need to be rebuilt from source (ie. HDFS, network, etc.), which is expensive.

A better solution would be to use persist(StorageLevel.MEMORY_AND_DISK), which will spill the RDD partitions to the Worker's local disk if they're evicted from memory. In this case, rebuilding a partition only requires pulling data from the Worker's local disk, which is relatively fast.
You also have the choice of persisting the data as serialized byte arrays by appending _SER to the StorageLevel: MEMORY_ONLY_SER and MEMORY_AND_DISK_SER. This can save space, but incurs an extra serialization/deserialization penalty. And because the data is stored as serialized byte arrays, fewer Java objects are created and therefore GC pressure is reduced.

You can also choose to replicate the data to another node by appending _2 to the StorageLevel (either serialized or not) as follows: MEMORY_ONLY_SER_2 and MEMORY_AND_DISK_2. This enables fast partition recovery in the case of a node failure, as the data can be rebuilt from a rack-local, neighboring node through the same network switch, for example.

You can see the full list here: https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.storage.StorageLevel$
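For illustration (assuming `lookupTable` and `hotData` are existing RDDs; the names are made up), the serialized and replicated variants are applied the same way:

import org.apache.spark.storage.StorageLevel

// Serialized in memory (spilled to disk if needed): fewer Java objects and less GC pressure,
// at the cost of serializing/deserializing partitions on access.
lookupTable.persist(StorageLevel.MEMORY_AND_DISK_SER)

// Replicated (_2): each partition is also stored on a second node, so a lost partition
// can be read from the replica instead of being recomputed from source.
hotData.persist(StorageLevel.MEMORY_AND_DISK_2)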
05-14-2019 06:26 AM
01-18-2020 07:21 AM
Hello Mefryar,
I still see that cache() is an alias of persist(StorageLevel.MEMORY_ONLY). Doc links attached:
Official doc
Official Pyspark doc
01-21-2020 09:31 AM
Hi, @Sivagangireddy Singam. I see that the RDD programming guide does say that the default storage level is MEMORY_ONLY, but the latest PySpark docs (2.4.4) state "The default storage level has changed to MEMORY_AND_DISK." (The PySpark docs you linked to were 2.1.2.)
07-04-2019 03:31 AM
Thanks for this clarification on the deserialization penalty. I always wanted to know when this penalty is imposed.
04-11-2017 02:24 AM
Hello,
Which is more efficient between an RDD and a DataFrame? (I mean, which is better to cache and consumes less memory?)
Thank you,