Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Getting OOM error while processing XML data

EktaPuri
New Contributor II

I have a table in which one column contains raw XML data; each row is approximately 3 MB. The data volume is very large, so I have chunked the processing into 1-hour windows (a sketch of this loop follows the configuration list below). The memory-utilization metrics look fine, yet I receive the error below:

org.apache.spark.SparkException: Job aborted due to stage failure: org.apache.spark.memory.SparkOutOfMemoryError: Photon ran out of memory while executing this query. Photon failed to reserve 6.7 MiB for BufferPool, in Current Column Batch, in FileScanNode(id=2513, output_schema=[string, string, string, bool, timestamp, date]), in task.

Solutions tried:

  • Allocate more memory: doesn't work; most of the memory is free
  • Increase overhead memory: doesn't work
  • Disable autoscaling
  • Photon is already disabled

Compute configuration:

  • Databricks Runtime: 15.4 LTS (includes Apache Spark 3.5.0, Scala 2.12)
  • Photon acceleration: disabled
  • Worker type: Standard_E32_v3 (driver type is the same)
  • Autoscaling: 1-8 workers
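
For context, here is a minimal sketch of what the 1-hour chunked processing described above could look like; the window bounds and the process_window helper are hypothetical, not taken from this thread.

from datetime import datetime, timedelta

def process_window(start, end):
    # Placeholder for the per-window filter/join/write logic shown later in the thread.
    pass

current_start_time = datetime(2025, 1, 15, 0, 0)   # assumed backfill start
backfill_end = datetime(2025, 1, 16, 0, 0)         # assumed backfill end

while current_start_time < backfill_end:
    # Each iteration processes exactly one hour of load_ts values.
    current_end_time = current_start_time + timedelta(hours=1)
    process_window(current_start_time, current_end_time)
    current_start_time = current_end_time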

 

8 REPLIES

Alberto_Umana
Databricks Employee

Hi @EktaPuri,

Was this failure observed before? Can you share more context on what you are doing?

EktaPuri
New Contributor II

Hi @Alberto_Umana,

Here, from the raw XML data we extract tags and their respective hex-string values, decode them, and build a JSON object from the result using rdd.map. This used to work when the data load was lighter. Now we are doing a history load (not the full history, only files that were missed or are new), processed in 1-hour intervals. I join the new records against the already-processed files, since I don't want to reprocess files that were handled earlier; I broadcast that frame, and since it contains only one column it is only about 400 MB. A bigger pain point is that the data in the bronze layer has a high number of duplicates, so we had to run dropDuplicates on logfile_nm. What I want to understand is this: BufferPool memory is part of executor memory, and on investigation the executor memory utilization looks fine, so where exactly is the memory problem arising?
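
As an illustration of the extract/decode step described above, here is a minimal sketch; the column name xml_raw, the tag structure, and the hex-to-UTF-8 decoding are all assumptions, since the thread does not show the real logic.

import json
import xml.etree.ElementTree as ET

def xml_to_json(xml_str):
    # Walk every element, try to decode its text as a hex string,
    # and fall back to the raw text when it is not valid hex.
    decoded = {}
    for elem in ET.fromstring(xml_str).iter():
        text = (elem.text or "").strip()
        try:
            decoded[elem.tag] = bytes.fromhex(text).decode("utf-8")
        except ValueError:  # also covers UnicodeDecodeError
            decoded[elem.tag] = text
    return json.dumps(decoded)

print(xml_to_json("<root><tag1>48656c6c6f</tag1><tag2>world</tag2></root>"))
# prints {"root": "", "tag1": "Hello", "tag2": "world"}

# On Spark this would run per row, e.g. (xml_raw is an assumed column name):
# json_rdd = filteredDataframe.rdd.map(lambda row: xml_to_json(row["xml_raw"]))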

Also, more info about the error:

Total task memory (including non-Photon): 1772.5 MiB
task: allocated 1647.0 MiB, tracked 1772.5 MiB, untracked allocated 0.0 B, peak 1772.5 MiB
  BufferPool: allocated 2.5 MiB, tracked 128.0 MiB, untracked allocated 0.0 B, peak 128.0 MiB
  DataWriter: allocated 0.0 B, tracked 0.0 B, untracked allocated 0.0 B, peak 0.0 B
  FileScanNode(id=2161, output_schema=[string, string, string, bool, timestamp, date]): allocated 1644.5 MiB, tracked 1644.5 MiB, untracked allocated 0.0 B, peak 1644.5 MiB
    Current Column Batch: allocated 1472.9 MiB, tracked 1473.0 MiB, untracked allocated 0.0 B, peak 1473.0 MiB
      BufferPool: allocated 1472.9 MiB, tracked 1473.0 MiB, untracked allocated 0.0 B, peak 1473.0 MiB
        dictionary values: allocated 1024.0 B, tracked 1024.0 B, untracked allocated 0.0 B, peak 1024.0 B
        dictionary values: allocated 4.0 KiB, tracked 4.0 KiB, untracked allocated 0.0 B, peak 4.0 KiB
        dictionary values: allocated 1024.0 B, tracked 1024.0 B, untracked allocated 0.0 B, peak 1024.0 B
        dictionary values: allocated 8.0 KiB, tracked 8.0 KiB, untracked allocated 0.0 B, peak 8.0 KiB
        dictionary values: allocated 1024.0 B, tracked 1024.0 B, untracked allocated 0.0 B, peak 1024.0 B
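
One hedged reading of the report above: the 1473 MiB sits in the scan's Current Column Batch, and with roughly 3 MB of XML per row, a vectorized reader batch of a few thousand rows would be enormous. In open-source Spark the Parquet reader batch size is controlled by spark.sql.parquet.columnarReaderBatchSize (default 4096 rows); whether the Photon scan here honors it is an assumption, so treat this as something to test, not a confirmed fix.

# Sketch: shrink the vectorized reader's row-batch size so that a batch of
# ~3 MB XML strings stays small (assumes a Parquet/Delta source table).
spark.conf.set("spark.sql.parquet.columnarReaderBatchSize", "256")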

Avinash_Narala
Valued Contributor II

Try using a memory-intensive cluster with more driver and worker memory than the current one.

Hi Avinash,

Already tried.

[Screenshot: EktaPuri_0-1736999629659.png]

As you can see in the screenshot above, memory utilization is low.

Avinash_Narala
Valued Contributor II

Are you sure the logic is being executed on the workers and not entirely on the driver? There are cases where the entire logic has to run on the driver, leaving worker memory under-utilized. The same applies to spark.sql statements: the Spark session cannot be shipped to the workers, so logic built around it runs in driver memory, which can lead to an OOM on the driver while worker memory stays under-utilized. (See the illustrative sketch below.)
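
A quick illustration of the distinction (big_df and the table name are placeholders): transformations and distributed writes execute on the executors, while collect() or toPandas() pull every row back to the driver and can OOM it even when the workers look idle.

# Driver-heavy: every row is shipped back to the driver process.
rows = big_df.collect()

# Executor-heavy: the work and the data stay distributed across workers.
big_df.write.mode("overwrite").saveAsTable("tmp_big_df_copy")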

Hi,

I am sure the logic is not running on the driver.

Below is the driver utilization. That is exactly my question: based on the error and the logs, I am not sure where exactly the memory problem is occurring.

[Screenshot: EktaPuri_0-1737001087654.png]

 

EktaPuri
New Contributor II

Note: Photon is not enabled 

EktaPuri
New Contributor II
from pyspark.sql import functions as f

filteredDataframe = (
    spark.table(f'{sourceConfig["srcDatabaseName"]}.{sourceConfig["srcTableName"]}')
    .filter(f.col("load_dt") == current_start_time.date())
    .filter(f.col("load_ts") >= current_start_time)
    .filter(f.col("load_ts") < current_end_time)
    .filter("col1 == 'value'")
    .filter(f.col("col2") == "true")
    .select("col3", "col4", "col5", "col6", "col7", "col8")
    .dropDuplicates(["col4"])
)

dataframe = filteredDataframe.join(f.broadcast(metadata), "col4", "leftanti")

The join and the de-duplication happen here. I have increased the auto-broadcast threshold to 1g, and even then the broadcast side is only about 400 MB.
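
For reference, the threshold change mentioned above expressed as a config call (a sketch; 1g is the value from the post). Note that an explicit f.broadcast() hint forces the broadcast regardless of this threshold, so the setting mainly affects joins Spark plans to broadcast on its own.

# Raise the size limit under which Spark automatically broadcasts a join side.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "1g")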
