Why is providing a list of filenames to spark.read.csv([file1, file2, file3]) much faster than providing a directory with a wildcard, spark.read.csv("/path/*")?
05-30-2022 09:45 PM
I have a huge number of small files in S3, and I was going through a few blogs where people say that providing a list of files, like spark.read.csv([file1, file2, file3]), is faster than giving a directory with a wildcard.
Reason: with a wildcard, Spark first has to do an extra `ls` (listing the file names) on the directory before it can read the files.
Do you have any docs or references that justify this reasoning? I know it might be true, but I want to understand in more detail how the Spark read command works behind the scenes.
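For reference, here is a minimal sketch of the two read patterns I'm comparing (the bucket and file names below are just placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-comparison").getOrCreate()

base = "s3://my-bucket/data"  # placeholder path

# Pattern 1: wildcard -- the blogs claim Spark must first list the
# directory to discover which files match before it can read them.
df_glob = spark.read.csv(f"{base}/*.csv", header=True)

# Pattern 2: explicit list of files -- the paths are already known
# up front, so (per those blogs) no directory listing is needed.
files = [f"{base}/file1.csv", f"{base}/file2.csv", f"{base}/file3.csv"]
df_list = spark.read.csv(files, header=True)
```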
Labels:
- Spark

