10-06-2021 12:51 PM
I'm new to Spark and trying to understand how some of its components work.
I understand that once the data is loaded into the memory of the individual nodes, they process their partitions in parallel, each within its own memory (RAM).
But are the initial loads of the partitions into memory also done in parallel? AFAIK some SSDs allow concurrent reads, but I'm not sure whether that applies here.
Also, what exactly is partitioning in the context of Spark? Does the original file get split into smaller files, or does each node read from a certain begin_byte to end_byte?
- Labels: Parallelism, Partitioning, Read data, Spark
Accepted Solutions
10-08-2021 12:11 AM
@Narek Margaryan, normally the reading is done in parallel because the underlying file system is already distributed (if you use HDFS-based storage or something similar, e.g. a data lake).
The number of partitions the input is split into also matters, since that caps the read parallelism. You can check it directly, as in the sketch below.
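A minimal PySpark sketch to inspect this; the file path is a hypothetical example:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-demo").getOrCreate()

# Reading a distributed file: Spark plans one task per input partition,
# and those tasks run in parallel across the executors.
df = spark.read.parquet("/data/events.parquet")  # hypothetical path

# How many partitions (and therefore parallel read tasks) the scan produced:
print(df.rdd.getNumPartitions())
```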
This leads me to your second question:
Partitioning in the context of Spark is how the data is divided into chunks that tasks process in parallel. When reading, each partition usually maps to a whole file or, for splittable formats, to a byte range within a file; the original file is not rewritten into smaller files.
There is a lot more to it, like shuffling, file formats, and system parameters you can set.
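For example, a short sketch of a few of those knobs; the values and path are illustrative only, not recommendations:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-knobs").getOrCreate()

# Cap the bytes packed into one input partition (the default is 128 MB),
# so one large splittable file is scanned as several parallel partitions.
spark.conf.set("spark.sql.files.maxPartitionBytes", str(64 * 1024 * 1024))

# Wide operations (joins, groupBy, ...) shuffle the data; this controls
# how many partitions the shuffle produces (the default is 200).
spark.conf.set("spark.sql.shuffle.partitions", "64")

# You can also repartition explicitly after reading.
df = spark.read.parquet("/data/events.parquet").repartition(32)
print(df.rdd.getNumPartitions())  # 32
```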