09-01-2022 10:45 AM
Hi team,
We are trying to read many tiny XML files. We can parse them with the Databricks spark-xml jar, but is there a way to read these files in parallel and distribute the load across the cluster?
Right now the job spends about 90% of its time reading the files; the only transformation is flattening the XMLs.
Please suggest any way to improve the performance.
Code snippet:
def rawXml2df(fnames: List[String], ss: SparkSession): DataFrame = {
  // print(s"fnames ${fnames.mkString(",")}")
  ss.read
    .format("com.databricks.spark.xml")
    .schema(thSchema)
    .option("rowTag", "ns2:TransactionHistory")
    .option("attributePrefix", "_")
    .load(fnames.mkString(","))
}
val df0 = rawXml2df(getListOfFiles(new File("ds-tools/aws-glue-local-test/src/main/scala/tracelink/ds/input")), sparkSession)
Logs:
2022-09-01 13:37:36 INFO - Finished task 14196.0 in stage 2.0 (TID 33078). 2258 bytes result sent to driver
2022-09-01 13:37:36 INFO - Starting task 14197.0 in stage 2.0 (TID 33079, localhost, executor driver, partition 14197, PROCESS_LOCAL, 8024 bytes)
2022-09-01 13:37:36 INFO - Finished task 14196.0 in stage 2.0 (TID 33078) in 44 ms on localhost (executor driver) (14197/18881)
2022-09-01 13:37:36 INFO - Running task 14197.0 in stage 2.0 (TID 33079)
2022-09-01 13:37:36 INFO - Input split: file:/Users/john/ds-tools/aws-glue-local-test/src/main/scala/ds/input/09426edf-39e0-44d7-bda5-be49ff56512e:0+2684
09-01-2022 01:26 PM
To my knowledge, there are no further options to optimize your code. https://github.com/databricks/spark-xml
It is the correct (and only) library for reading XML, so on the Databricks side there is not much you can do except experiment with other cluster configurations.
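One experiment worth trying before restructuring anything: Spark's generic file-source reader can pack several tiny files into a single read task, controlled by spark.sql.files.maxPartitionBytes and spark.sql.files.openCostInBytes. I am not certain every spark-xml version honors these settings, so treat the sketch below as something to verify against the stage's task count in the Spark UI rather than a guaranteed fix (the rawXml2dfPacked name and the directory-glob input are only for illustration):

import org.apache.spark.sql.{DataFrame, SparkSession}

// Hedged sketch: ask Spark to pack several small files into one read task.
// Whether the spark-xml format honors these settings depends on the version,
// so compare the task count in the Spark UI before and after.
def rawXml2dfPacked(inputDir: String, ss: SparkSession): DataFrame = {
  ss.conf.set("spark.sql.files.maxPartitionBytes", 128L * 1024 * 1024) // aim for ~128 MB of input per task
  ss.conf.set("spark.sql.files.openCostInBytes", 4L * 1024 * 1024)     // treat each file open as ~4 MB of extra cost

  ss.read
    .format("com.databricks.spark.xml")
    .schema(thSchema)                              // same schema object as in the snippet above
    .option("rowTag", "ns2:TransactionHistory")
    .option("attributePrefix", "_")
    .load(s"$inputDir/*")                          // a directory glob instead of a comma-joined file list
}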
Reading many small files is always slow; this is the well-known "tiny files problem."
I don't know your architecture, but perhaps when the XMLs are saved, each new file could be appended to the previous one, or some trigger could merge them (a sketch of such a merge step is below).
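If the upstream system cannot do that, a hedged sketch of such a merge trigger as a small Spark pre-step follows: it concatenates the tiny files into a few larger ones with wholeTextFiles plus coalesce. The function name, paths, and target file count are hypothetical, and it assumes the downstream spark-xml read only looks for the configured rowTag, so several concatenated documents per file still parse; worth verifying on a sample first.

import org.apache.spark.sql.SparkSession

// Hedged sketch of a compaction pre-step: pack many tiny XML files into a few larger
// text files so the downstream spark-xml read opens far fewer files.
// Assumes the reader extracts rows by rowTag and tolerates several concatenated
// documents in one file; verify that on a sample before adopting this.
def compactXmlFiles(ss: SparkSession, inputDir: String, outputDir: String, targetFiles: Int): Unit = {
  ss.sparkContext
    .wholeTextFiles(inputDir)              // one (path, full file content) record per tiny file
    .map { case (_, content) => content }  // keep only the XML text
    .coalesce(targetFiles)                 // e.g. a few dozen partitions instead of ~19k files
    .saveAsTextFile(outputDir)             // each part-file now holds many XML documents
}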
09-05-2022 07:11 AM
Thank you @Hubert Dudek for the suggestion. As you recommended, we added a step in our pipeline to merge the small files into larger files and make them available for the Spark job.
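For anyone finding this thread later, a hedged sketch of how the merged output plugs back into the original reader; the merged-output directory name is hypothetical, and the functions are the ones from the snippet above:

// Point the unchanged reader at the directory that now holds the merged, larger files.
val mergedDf = rawXml2df(
  getListOfFiles(new File("ds-tools/aws-glue-local-test/src/main/scala/tracelink/ds/merged-input")), // hypothetical merged-output dir
  sparkSession
)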