Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

org.apache.spark.SparkException: [TASK_WRITE_FAILED] Task failed while writing rows

satyasamal
New Contributor II

Hello All,

My DataFrame has 1 million records, and one of its columns contains XML documents. I am trying to parse the XML using the XPath function. It works fine for a small number of records, but it fails when I run it against all 1 million records.
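For reference, the parsing step looks roughly like this (the column name `xml_payload` and the target table are placeholders; the XPath expression is the one from the error below):

```python
from pyspark.sql import functions as F

# Each row holds a full XML document in a string column (assumed name: xml_payload).
parsed = df.withColumn(
    "gklo_type_code",
    F.expr(
        "xpath_string(xml_payload, '/SSEVENT/KKGKXA/GKLO-HEADER/GKLO-KEY/GKLO-TYPE-CODE/text()')"
    ),
)

# Writing the parsed result to a table is where the task failure surfaces.
parsed.write.mode("overwrite").saveAsTable("catalog.schema.parsed_events")  # placeholder name
```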

Error message: pyspark.errors.exceptions.connect.SparkException: Job aborted due to stage failure: Task 5 in stage 414054.0 failed 4 times, most recent failure: Lost task 5.14 in stage 414054.0 (TID 1658725) (172.18.1.205 executor 316): org.apache.spark.SparkException: [TASK_WRITE_FAILED] Task failed while writing rows to abfss://........./__unitystorage/schemas/cb65ef1e-aed6-4a14-b92e-1bd9c830b491/tables/4580f459-ff87-49a4-9f7d-3902e67e0a91. SQLSTATE: 58030

Caused by: java.lang.RuntimeException: Error loading expression '/SSEVENT/KKGKXA/GKLO-HEADER/GKLO-KEY/GKLO-TYPE-CODE/text()

Caused by: java.util.MissingResourceException: Could not load any resource bundle by com.sun.org.apache.xerces.internal.impl.msg.XMLMessages

 

Is this a memory issue? How should I handle this situation?

 

Accepted Solution

VZLA
Databricks Employee

Thank you for your question. The error is likely caused by memory issues or inefficient processing of the large dataset. Parsing XML with XPath is resource-intensive, and handling 1 million records requires optimization.

You can try df = df.repartition(100), increase the spark.task.cpus setting from 1 to 2, or increase the executor size. This will at least give you insight into how much capacity is truly required and whether the data is fully and evenly parallelized, so you can tune it further.
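A minimal sketch of the repartition approach, assuming the XML sits in a string column called `xml_payload` and the target table name is illustrative (note that `spark.task.cpus` has to be set in the cluster's Spark config, not changed on a running session):

```python
from pyspark.sql import functions as F

# Spread the 1 million rows over more tasks so each task parses fewer XML documents.
df = df.repartition(100)

# The XPath parse itself is unchanged; only the parallelism around it is tuned.
parsed = df.withColumn(
    "gklo_type_code",
    F.expr(
        "xpath_string(xml_payload, '/SSEVENT/KKGKXA/GKLO-HEADER/GKLO-KEY/GKLO-TYPE-CODE/text()')"
    ),
)

parsed.write.mode("overwrite").saveAsTable("catalog.schema.parsed_events")  # illustrative target

# To give each task more CPU, add "spark.task.cpus 2" to the cluster's Spark config
# (Compute > Advanced options > Spark) and restart the cluster.
```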


