How to read multiple tiny XML files in parallel

Paramesh
New Contributor II

Hi team,

We are trying to read a large number of tiny XML files. We are able to parse them with the Databricks spark-xml library, but is there any way to read these files in parallel and distribute the load across the cluster?

Right now our job spends roughly 90% of its time reading the files; there is only one transformation, i.e. flattening the XMLs.

Please suggest if there is any way to improve the performance.

Code snippet:

    def rawXml2df(fnames: List[String], ss: SparkSession): DataFrame = {
      // print(s"fnames ${fnames.mkString(",")}")
      ss.read
        .format("com.databricks.spark.xml")
        .schema(thSchema)
        .option("rowTag", "ns2:TransactionHistory")
        .option("attributePrefix", "_")
        .load(fnames.mkString(","))
    }

    val df0 = rawXml2df(getListOfFiles(new File("ds-tools/aws-glue-local-test/src/main/scala/tracelink/ds/input")), sparkSession)
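
(Not from the original post — a minimal sketch, assuming all the tiny XMLs sit under a single flat input directory: passing a glob to load() lets Spark list and split the files itself, and one repartition afterwards keeps downstream stages from running a task per tiny file. It does not remove the per-file open/parse overhead of the read itself, which is what the accepted answer below addresses.)

    // Hypothetical variant, assuming one flat directory of *.xml files.
    def rawXmlDir2df(inputDir: String, ss: SparkSession): DataFrame = {
      ss.read
        .format("com.databricks.spark.xml")
        .schema(thSchema)                            // same schema as above
        .option("rowTag", "ns2:TransactionHistory")
        .option("attributePrefix", "_")
        .load(s"$inputDir/*.xml")                    // glob instead of a comma-joined file list
        .repartition(ss.sparkContext.defaultParallelism)
    }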
 
Logs: 
 
2022-09-01 13:37:36 INFO  - Finished task 14196.0 in stage 2.0 (TID 33078). 2258 bytes result sent to driver
2022-09-01 13:37:36 INFO  - Starting task 14197.0 in stage 2.0 (TID 33079, localhost, executor driver, partition 14197, PROCESS_LOCAL, 8024 bytes)
2022-09-01 13:37:36 INFO  - Finished task 14196.0 in stage 2.0 (TID 33078) in 44 ms on localhost (executor driver) (14197/18881)
2022-09-01 13:37:36 INFO  - Running task 14197.0 in stage 2.0 (TID 33079)
2022-09-01 13:37:36 INFO  - Input split: file:/Users/john/ds-tools/aws-glue-local-test/src/main/scala/ds/input/09426edf-39e0-44d7-bda5-be49ff56512e:0+2684
 

1 ACCEPTED SOLUTION

Hubert-Dudek
Esteemed Contributor III

To my knowledge, there are no options to optimize your code. https://github.com/databricks/spark-xml

It is the correct (and only) way to read XMLs, so on the Databricks side there is not much you can do except experiment with other cluster configurations.

Reading many small files is always slow; this is the well-known "tiny files problem."

I don't know your architecture, but perhaps when the XMLs are saved, each new file could be appended to the previous one (or some trigger could merge them).
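
For illustration only — a rough sketch of the kind of merge step suggested here (nothing below is from the thread; the helper name, directory layout and chunk size are made up, and it assumes each tiny file holds a single <ns2:TransactionHistory> element that spark-xml can later pick out via rowTag):

    import java.io.File
    import java.nio.charset.StandardCharsets
    import java.nio.file.{Files, Paths}
    import scala.io.Source

    // Hypothetical pre-processing step: concatenate batches of tiny XML files
    // into larger ones so the Spark job has far fewer files to open.
    def mergeTinyXmls(inputDir: String, outputDir: String, filesPerChunk: Int = 1000): Unit = {
      Files.createDirectories(Paths.get(outputDir))
      val files = new File(inputDir).listFiles().filter(_.isFile).toSeq
      files.grouped(filesPerChunk).zipWithIndex.foreach { case (chunk, i) =>
        val merged = chunk.map { f =>
          val src = Source.fromFile(f, "UTF-8")
          // Drop each file's XML declaration before appending its row element.
          try src.mkString.replaceAll("<\\?xml[^>]*\\?>", "").trim finally src.close()
        }.mkString("\n")
        Files.write(Paths.get(outputDir, s"merged-$i.xml"), merged.getBytes(StandardCharsets.UTF_8))
      }
    }

In practice the merge would more likely run where the files land (for example, via the trigger mentioned above) rather than on a single machine, but the idea is the same: fewer, larger files for Spark to open.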

4 REPLIES

Kaniz
Community Manager

Hi @Paramesh Nalla, we haven't heard from you since @Hubert Dudek's last response, and I was checking back to see if his suggestions helped you. If you have found a solution, please share it with the community, as it can be helpful to others. Also, please don't forget to click the "Select As Best" button whenever the information provided helps resolve your question.

Paramesh
New Contributor II

Thank you for the follow-up. I have added a new comment below.

Paramesh
New Contributor II

Thank you @Hubert Dudek for the suggestion. Along the lines of your recommendation, we added a step to our pipeline that merges the small files into larger files and makes them available for the Spark job.
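
For completeness, a hypothetical way such a merge step could be wired in front of the existing reader (the directory names are illustrative, and mergeTinyXmls refers to the sketch in the accepted answer above, not to actual code from this pipeline):

    // Hypothetical: merge first, then point the existing reader at the merged output.
    mergeTinyXmls("input/tiny-xmls", "input/merged-xmls")
    val df0 = rawXml2df(getListOfFiles(new File("input/merged-xmls")), sparkSession)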
