Hi team,
we are trying to read multiple tiny XML files. We are able to parse them using the Databricks spark-xml library, but is there any way to read these files in parallel and distribute the load across the cluster?
Right now our job spends about 90% of its time reading the files; there is only one transformation, i.e. flattening the XMLs.
Please suggest if there is any way to improve the performance.
code snippet:
import java.io.File

import org.apache.spark.sql.{DataFrame, SparkSession}

def rawXml2df(fnames: List[String], ss: SparkSession): DataFrame = {
  // print(s"fnames ${fnames.mkString(",")}")
  ss.read
    .format("com.databricks.spark.xml")
    .schema(thSchema)                            // thSchema: XML schema, defined elsewhere
    .option("rowTag", "ns2:TransactionHistory")
    .option("attributePrefix", "_")
    .load(fnames.mkString(","))                  // single load over a comma-separated list of paths
}
val df0 = rawXml2df(getListOfFiles(new File("ds-tools/aws-glue-local-test/src/main/scala/tracelink/ds/input")), sparkSession)
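For reference, getListOfFiles is roughly the following (a simplified sketch, not the exact implementation; it just collects the paths of the files directly under the input directory):

import java.io.File

// Simplified sketch of the helper used above (assumption): list the files
// directly under the given directory and return their paths as strings.
def getListOfFiles(dir: File): List[String] = {
  if (dir.exists && dir.isDirectory)
    dir.listFiles.filter(_.isFile).map(_.getPath).toList
  else
    List.empty[String]
}

So the single load() call receives a comma-separated string containing all of the tiny file paths at once.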
Logs:
2022-09-01 13:37:36 INFO - Finished task 14196.0 in stage 2.0 (TID 33078). 2258 bytes result sent to driver
2022-09-01 13:37:36 INFO - Starting task 14197.0 in stage 2.0 (TID 33079, localhost, executor driver, partition 14197, PROCESS_LOCAL, 8024 bytes)
2022-09-01 13:37:36 INFO - Finished task 14196.0 in stage 2.0 (TID 33078) in 44 ms on localhost (executor driver) (14197/18881)
2022-09-01 13:37:36 INFO - Running task 14197.0 in stage 2.0 (TID 33079)
2022-09-01 13:37:36 INFO - Input split: file:/Users/john/ds-tools/aws-glue-local-test/src/main/scala/ds/input/09426edf-39e0-44d7-bda5-be49ff56512e:0+2684