Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to read multiple tiny XML files in parallel

Paramesh
New Contributor II

Hi team,

We are trying to read a large number of tiny XML files. We can parse them with the Databricks spark-xml library, but is there any way to read these files in parallel and distribute the load across the cluster?

Right now our job spends about 90% of its time reading the files; the only transformation is flattening the XMLs.

Please suggest any way to improve the performance.

code snippet:

    def rawXml2df(fnames: List[String], ss: SparkSession): DataFrame = {
      // spark-xml accepts a comma-separated list of paths, so all files are read in one call
      ss.read
        .format("com.databricks.spark.xml")
        .schema(thSchema)
        .option("rowTag", "ns2:TransactionHistory")
        .option("attributePrefix", "_")
        .load(fnames.mkString(","))
    }
 
val df0 = rawXml2df(getListOfFiles(new File("ds-tools/aws-glue-local-test/src/main/scala/tracelink/ds/input")), sparkSession)
 
Logs: 
 
2022-09-01 13:37:36 INFO  - Finished task 14196.0 in stage 2.0 (TID 33078). 2258 bytes result sent to driver
2022-09-01 13:37:36 INFO  - Starting task 14197.0 in stage 2.0 (TID 33079, localhost, executor driver, partition 14197, PROCESS_LOCAL, 8024 bytes)
2022-09-01 13:37:36 INFO  - Finished task 14196.0 in stage 2.0 (TID 33078) in 44 ms on localhost (executor driver) (14197/18881)
2022-09-01 13:37:36 INFO  - Running task 14197.0 in stage 2.0 (TID 33079)
2022-09-01 13:37:36 INFO  - Input split: file:/Users/john/ds-tools/aws-glue-local-test/src/main/scala/ds/input/09426edf-39e0-44d7-bda5-be49ff56512e:0+2684
 

1 ACCEPTED SOLUTION


Hubert-Dudek
Esteemed Contributor III

To my knowledge, there are no options left to optimize your code. https://github.com/databricks/spark-xml

spark-xml is the correct (and the only) way to read XMLs, so on the Databricks side there is not much you can do except experiment with other cluster configurations.

Reading many small files is always slow; this is the well-known "small files problem."

I don't know your architecture, but perhaps when the XMLs are saved they could be appended to the previous file (or some trigger could merge them).
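
For illustration only, a minimal sketch of what such a scheduled merge (compaction) trigger could look like. Nothing below is from the thread: the helper name, directory arguments, coalesce factor, and rootTag value are hypothetical; the schema and rowTag are taken from the question's code.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.types.StructType

    // Hypothetical compaction trigger: read the tiny XMLs once and rewrite them
    // as a handful of larger XML files for the main job to consume.
    def mergeTinyXmls(landingDir: String, mergedDir: String,
                      schema: StructType, ss: SparkSession): Unit = {
      val df = ss.read
        .format("com.databricks.spark.xml")
        .schema(schema)                                 // same schema as thSchema in the question
        .option("rowTag", "ns2:TransactionHistory")
        .option("attributePrefix", "_")
        .load(landingDir)                               // read the whole landing directory at once

      df.coalesce(8)                                    // a few large files instead of thousands of tiny ones
        .write
        .format("com.databricks.spark.xml")
        .option("rootTag", "ns2:TransactionHistories")  // hypothetical wrapper element for the merged files
        .option("rowTag", "ns2:TransactionHistory")     // the ns2 prefix is written literally, without a namespace declaration
        .option("attributePrefix", "_")
        .mode("overwrite")
        .save(mergedDir)
    }

The downstream job can then point at the merged directory instead of thousands of individual files.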


3 REPLIES

Hubert-Dudek
Esteemed Contributor III

Thank you for the follow-up. I've added my new comment.

Paramesh
New Contributor II

Thank you @Hubert Dudek for the suggestion. Following your recommendation, we added a step to our pipeline that merges the small files into larger files and makes them available for the Spark job.
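
For illustration only, a minimal sketch of what such a pre-Spark merge step might look like. None of it is from the thread: the helper name mergeIntoBatches, the directory arguments, and the batch size are hypothetical, and it assumes spark-xml can still locate the ns2:TransactionHistory row tags in the concatenated files (it scans for the row tag rather than requiring a single well-formed document).

    import java.io.File
    import java.nio.file.{Files, Paths, StandardOpenOption}

    // Hypothetical merge step run before the Spark job: concatenate tiny XML files
    // into larger batch files so Spark schedules far fewer input tasks.
    def mergeIntoBatches(inputDir: String, outputDir: String, filesPerBatch: Int = 1000): Unit = {
      val inputs = new File(inputDir)
        .listFiles()
        .filter(_.getName.endsWith(".xml"))
        .sortBy(_.getName)

      Files.createDirectories(Paths.get(outputDir))

      inputs.grouped(filesPerBatch).zipWithIndex.foreach { case (batch, i) =>
        val target = Paths.get(outputDir, f"batch-$i%05d.xml")
        batch.foreach { f =>
          // append each tiny file's content, separated by a newline
          Files.write(target, Files.readAllBytes(f.toPath),
            StandardOpenOption.CREATE, StandardOpenOption.APPEND)
          Files.write(target, "\n".getBytes("UTF-8"),
            StandardOpenOption.CREATE, StandardOpenOption.APPEND)
        }
      }
    }

The merged directory (or its file list) can then be passed to rawXml2df in place of the original list of tiny files.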
