<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Read excel files and append to make one data frame in Databricks from azure data lake without specific file names in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/read-excel-files-and-append-to-make-one-data-frame-in-databricks/m-p/22492#M15406</link>
    <description>&lt;P&gt;If you are attempting to read all the files in a directory, you should be able to use a wildcard and filter by extension. For example:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;df = (spark
.read
.format("com.crealytics.spark.excel")
.option("header", "true")
.option("inferSchema", "true")
.option("dataAddress", "'Usage Dataset'!A2")
.load('/mnt/adls/40_project/UBC/WIP/Mercury/UUR_PS_raw_temp/*_Usage_Dataset.xlsx')
)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;This should read all the .xlsx files in that directory.&lt;/P&gt;&lt;P&gt;If you want to read only a subset of the files, you can loop through the directory listing, collect the file paths you want in a Python list, and then pass that list when reading. For example:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;files = dbutils.fs.ls('/mnt/adls/40_project/UBC/WIP/Mercury/UUR_PS_raw_temp/')
files_list = []

for f in files:
    # keep only the Excel workbook paths
    if f.path.endswith('.xlsx'):
        files_list.append(f.path)

df = (spark
.read
.format("com.crealytics.spark.excel")
.option("header", "true")
.option("inferSchema", "true")
.option("dataAddress", "'Usage Dataset'!A2")
.load(files_list)
)&lt;/CODE&gt;&lt;/PRE&gt;</description>
    <pubDate>Mon, 21 Jun 2021 18:33:00 GMT</pubDate>
    <dc:creator>Ryan_Chynoweth</dc:creator>
    <dc:date>2021-06-21T18:33:00Z</dc:date>
    <item>
      <title>Read excel files and append to make one data frame in Databricks from azure data lake without specific file names</title>
      <link>https://community.databricks.com/t5/data-engineering/read-excel-files-and-append-to-make-one-data-frame-in-databricks/m-p/22491#M15405</link>
      <description>&lt;P&gt;I am storing Excel files in Azure Data Lake (Gen1). The filenames follow the same pattern, "2021-06-18T09_00_07ONR_Usage_Dataset", "2021-06-18T09_00_07DSS_Usage_Dataset", etc., depending on the date and time. I want to read all the files in the folder in Azure Data Lake into Databricks without naming each specific file, so that in the future new files are read and appended to make one big dataset. The files all have the same schema, columns in the same order, etc. So far I have tried for loops with regex expressions:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;path = dbutils.fs.ls('/mnt/adls/40_project/UBC/WIP/Mercury/UUR_PS_raw_temp/')
for fi in path:
  print(fi)
  read = spark.read.format("com.crealytics.spark.excel").option("header", "True").option("inferSchema", "true").option("dataAddress", "'Usage Dataset'!A2").load(fi.path)
  display(read)
  print(read.count())&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;The output prints all the paths and counts each dataset that is being read, but it only displays the last one. I understand that is because I'm not storing or appending the results in the for loop, but when I add an append it breaks.&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;appended_data = []
path = dbutils.fs.ls('/mnt/adls/40_project/UBC/WIP/Mercury/UUR_PS_raw_temp/')
for fi in path:
    print(fi)
    read = spark.read.format("com.crealytics.spark.excel").option("header", "True").option("inferSchema", "true").option("dataAddress", "'Usage Dataset'!A2").load(fi.path)
    display(read)
    print(read.count())
    appended_data.append(read)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;But I get this error: FileInfo(path='dbfs:/mnt/adls/40_project/UBC/WIP/Mercury/UUR_PS_raw_temp/Initialization_DSS.xlsx', name='Initialization_DSS.xlsx', size=39781) TypeError: not supported type: &amp;lt;class 'py4j.java_gateway.JavaObject'&amp;gt;&lt;/P&gt;&lt;P&gt;The final way I tried:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;li = []
for f in glob.glob('/mnt/adls/40_project/UBC/WIP/Mercury/UUR_PS_raw_temp/*_Usage_Dataset.xlsx'):
    df = pd.read_excel(f)
    li.append(df)
    frame = pd.concat(li, axis=0, ignore_index=True)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;This says that there are no objects to concatenate. I have been researching everywhere and trying everything. Please help.&lt;/P&gt;</description>
      <pubDate>Fri, 18 Jun 2021 19:39:40 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/read-excel-files-and-append-to-make-one-data-frame-in-databricks/m-p/22491#M15405</guid>
      <dc:creator>User16765131552</dc:creator>
      <dc:date>2021-06-18T19:39:40Z</dc:date>
    </item>
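A note on the question's second attempt: appended_data ends up as a Python list of Spark DataFrames, which still has to be folded into a single DataFrame, typically with functools.reduce over a union. The sketch below shows only that reduce pattern, with plain Python lists standing in for the per-file DataFrames since no Spark session is assumed here; all data is illustrative.

```python
from functools import reduce

# Plain lists stand in for the per-file Spark DataFrames collected in
# appended_data; with real DataFrames the equivalent call would be
#     df = reduce(DataFrame.unionByName, appended_data)
# (DataFrame from pyspark.sql).
appended_data = [["a", "b"], ["c"], ["d", "e"]]

# Fold the list of pieces into one combined collection
combined = reduce(lambda left, right: left + right, appended_data)
print(combined)
```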
    <item>
      <title>Re: Read excel files and append to make one data frame in Databricks from azure data lake without specific file names</title>
      <link>https://community.databricks.com/t5/data-engineering/read-excel-files-and-append-to-make-one-data-frame-in-databricks/m-p/22492#M15406</link>
      <description>&lt;P&gt;If you are attempting to read all the files in a directory, you should be able to use a wildcard and filter by extension. For example:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;df = (spark
.read
.format("com.crealytics.spark.excel")
.option("header", "true")
.option("inferSchema", "true")
.option("dataAddress", "'Usage Dataset'!A2")
.load('/mnt/adls/40_project/UBC/WIP/Mercury/UUR_PS_raw_temp/*_Usage_Dataset.xlsx')
)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;This should read all the .xlsx files in that directory.&lt;/P&gt;&lt;P&gt;If you want to read only a subset of the files, you can loop through the directory listing, collect the file paths you want in a Python list, and then pass that list when reading. For example:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;files = dbutils.fs.ls('/mnt/adls/40_project/UBC/WIP/Mercury/UUR_PS_raw_temp/')
files_list = []

for f in files:
    # keep only the Excel workbook paths
    if f.path.endswith('.xlsx'):
        files_list.append(f.path)

df = (spark
.read
.format("com.crealytics.spark.excel")
.option("header", "true")
.option("inferSchema", "true")
.option("dataAddress", "'Usage Dataset'!A2")
.load(files_list)
)&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Mon, 21 Jun 2021 18:33:00 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/read-excel-files-and-append-to-make-one-data-frame-in-databricks/m-p/22492#M15406</guid>
      <dc:creator>Ryan_Chynoweth</dc:creator>
      <dc:date>2021-06-21T18:33:00Z</dc:date>
    </item>
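The answer's comment about using an if statement to filter the listed paths can be made concrete with fnmatch, which applies the same *_Usage_Dataset.xlsx wildcard used in the load example. The paths below are hypothetical sample data shaped like dbutils.fs.ls output from the thread; only the filtering logic is the point.

```python
from fnmatch import fnmatch

# Hypothetical listing shaped like the thread's dbutils.fs.ls results;
# real code would build it as: paths = [f.path for f in dbutils.fs.ls(dir_path)]
paths = [
    "dbfs:/mnt/adls/40_project/UBC/WIP/Mercury/UUR_PS_raw_temp/2021-06-18T09_00_07ONR_Usage_Dataset.xlsx",
    "dbfs:/mnt/adls/40_project/UBC/WIP/Mercury/UUR_PS_raw_temp/2021-06-18T09_00_07DSS_Usage_Dataset.xlsx",
    "dbfs:/mnt/adls/40_project/UBC/WIP/Mercury/UUR_PS_raw_temp/Initialization_DSS.xlsx",
]

# Keep only the usage-dataset workbooks, mirroring the *_Usage_Dataset.xlsx wildcard
files_list = [p for p in paths if fnmatch(p, "*_Usage_Dataset.xlsx")]
print(files_list)
```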
  </channel>
</rss>

