<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Why is providing a list of filenames to spark.read.csv([file1, file2, file3]) much faster than providing a directory with a wildcard spark.read.csv("/path/*")? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/why-providing-list-of-filenames-to-spark-read-csv-file1-fiel2/m-p/19206#M12850</link>
    <description>&lt;P&gt;I have a huge number of small files in S3, and I came across a few blogs saying that providing an explicit list of files, e.g. spark.read.csv([file1, file2, file3]), is faster than giving a directory with a wildcard.&lt;/P&gt;&lt;P&gt;Reason given: Spark first runs an extra `ls` (listing the file names) on the directory before reading the files.&lt;/P&gt;&lt;P&gt;Do you have any docs or references that back this up? I suspect it is true, but I want more detail on how the Spark read command works behind the scenes.&lt;/P&gt;</description>
    <pubDate>Tue, 31 May 2022 04:45:13 GMT</pubDate>
    <dc:creator>rakeshdey</dc:creator>
    <dc:date>2022-05-31T04:45:13Z</dc:date>
    <item>
      <title>Why is providing a list of filenames to spark.read.csv([file1, file2, file3]) much faster than providing a directory with a wildcard spark.read.csv("/path/*")?</title>
      <link>https://community.databricks.com/t5/data-engineering/why-providing-list-of-filenames-to-spark-read-csv-file1-fiel2/m-p/19206#M12850</link>
      <description>&lt;P&gt;I have a huge number of small files in S3, and I came across a few blogs saying that providing an explicit list of files, e.g. spark.read.csv([file1, file2, file3]), is faster than giving a directory with a wildcard.&lt;/P&gt;&lt;P&gt;Reason given: Spark first runs an extra `ls` (listing the file names) on the directory before reading the files.&lt;/P&gt;&lt;P&gt;Do you have any docs or references that back this up? I suspect it is true, but I want more detail on how the Spark read command works behind the scenes.&lt;/P&gt;</description>
      <pubDate>Tue, 31 May 2022 04:45:13 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/why-providing-list-of-filenames-to-spark-read-csv-file1-fiel2/m-p/19206#M12850</guid>
      <dc:creator>rakeshdey</dc:creator>
      <dc:date>2022-05-31T04:45:13Z</dc:date>
    </item>
  </channel>
</rss>

