@Narek Margaryan, normally the reading is done in parallel because the underlying file system is itself distributed (HDFS-based storage or a data lake, for example).
How the data itself is partitioned on disk (the number of files and blocks) also matters.
This leads me to your second question:
Partitioning in the context of Spark does indeed correspond to the number of files (or blocks) being read/written.
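For instance, you can check a DataFrame's partition count right after a read, and control the number of output files on write. A minimal Scala sketch, assuming a hypothetical Parquet dataset at hdfs:///data/events/:

```scala
import org.apache.spark.sql.SparkSession

object PartitionInspection {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("partition-inspection")
      .getOrCreate()

    // Each input file (or ~128 MB block of a larger file) typically
    // becomes one read partition, each processed by one task in parallel.
    val df = spark.read.parquet("hdfs:///data/events/") // hypothetical path
    println(s"Read partitions: ${df.rdd.getNumPartitions}")

    // Writing with an explicit partition count yields that many output files.
    df.repartition(8)
      .write
      .mode("overwrite")
      .parquet("hdfs:///data/events_repartitioned/") // hypothetical path

    spark.stop()
  }
}
```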
There is a lot more to it, though: shuffling behavior, the file format, and system parameters you can tune, among other things.
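To give a flavor of those settings, here is a short sketch of two session-level configs that affect partition counts (the keys are real Spark SQL parameters; the values are arbitrary examples, not recommendations):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("tuning-example").getOrCreate()

// Partitions produced by shuffle operations (joins, groupBy); Spark's default is 200.
spark.conf.set("spark.sql.shuffle.partitions", "64")

// Upper bound on how many bytes are packed into one read partition (default 128 MB).
spark.conf.set("spark.sql.files.maxPartitionBytes", 134217728L)
```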