I have a huge number of small files in S3, and I was going through a few blogs where people say that providing an explicit list of files, e.g. `spark.read.csv([file1, file2, file3])`, is faster than pointing at a directory with a wildcard.
Reason given: Spark first does an extra `ls` (listing the file names) on the directory before it can read the files.
Do you have any docs or references that back this up? I suspect it is true, but I'd like to understand in more detail how the Spark read command works behind the scenes.
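For context, here is a minimal sketch of the two approaches I'm comparing (bucket name and file paths are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("small-files-read").getOrCreate()

# Approach 1: pass an explicit list of object paths (placeholder paths),
# so Spark already knows every file it needs to open
files = [
    "s3a://my-bucket/data/part-0001.csv",
    "s3a://my-bucket/data/part-0002.csv",
    "s3a://my-bucket/data/part-0003.csv",
]
df_from_list = spark.read.csv(files, header=True)

# Approach 2: point at the prefix with a wildcard; Spark has to list the
# prefix first to discover which objects match before reading them
df_from_glob = spark.read.csv("s3a://my-bucket/data/*.csv", header=True)
```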