01-15-2025 05:50 PM - edited 01-15-2025 05:56 PM
I have a table in which one of the columns contains raw XML data; each row is roughly 3 MB and the overall volume is very large, so I have chunked processing into 1-hour windows. The memory utilization metrics all look fine, but I am receiving the error below:

org.apache.spark.SparkException: Job aborted due to stage failure: org.apache.spark.memory.SparkOutOfMemoryError: Photon ran out of memory while executing this query. Photon failed to reserve 6.7 MiB for BufferPool, in Current Column Batch, in FileScanNode(id=2513, output_schema=[string, string, string, bool, timestamp, date]), in task.
Solutions tried so far:
Allocate more memory - doesn't work; most of the memory is free
Increase overhead memory - doesn't work
Disable autoscaling
Photon is already disabled
Compute configuration: (cluster configuration screenshot attached)
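One mitigation that may be worth testing (a hedged sketch, not something tried above): with ~3 MB rows, the vectorized reader's default batch of 4096 rows can try to materialize several GB in a single column batch, which is exactly where the BufferPool reservation sits in the error. Shrinking the batch size and the input splits caps what one scan task holds at once. The keys below are standard Spark settings, and whether Photon's own scan honors columnarReaderBatchSize is an assumption to verify.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Default is 4096 rows per column batch; at ~3 MB per XML row a single
# batch can approach 12 GB, far beyond the per-task reservation.
spark.conf.set("spark.sql.parquet.columnarReaderBatchSize", "256")

# Smaller input splits so each scan task reads less data at a time.
spark.conf.set("spark.sql.files.maxPartitionBytes", str(64 * 1024 * 1024))  # 64 MiB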
01-15-2025 06:22 PM
Hi @EktaPuri,
Has this failure been observed before? Can you share more context on what you are doing?
01-15-2025 06:41 PM - edited 01-15-2025 06:44 PM
Here, from the xml_raw data, we extract tags and their respective hex string values, decode them, and build a JSON object over them using rdd.map. This used to work when the data load was lighter. Now we are doing a history load (not the full history, only files that were missed or are new) on a 1-hour processing interval. I join the new records with the already-processed files, since I don't want to reprocess files that were handled earlier; I broadcast that frame, and since it contains only one column it is only about 400 MB. A major issue is that the data provided in the bronze layer seems to have a high number of duplicates, so we had to run dropDuplicates on logfile_nm, which is one pain point. What I want to understand is whether the BufferPool memory is part of executor memory; on investigation the executor memory utilization looks fine, so where exactly is the memory problem arising?
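For reference, a minimal sketch of the flow described above, assuming hypothetical table names (bronze.xml_events, bronze.processed_files) and a placeholder decode_row; none of this is the actual job code:

from pyspark.sql import functions as F

# Current window's records and the single-column frame of already-processed files.
new_df = spark.table("bronze.xml_events")
processed_df = spark.table("bronze.processed_files").select("logfile_nm")

fresh_df = (
    new_df
    .dropDuplicates(["logfile_nm"])                              # bronze data carries duplicates
    .join(F.broadcast(processed_df), "logfile_nm", "left_anti")  # keep only unseen files
)

def decode_row(row):
    # placeholder for the tag extraction, hex decoding, and JSON assembly
    return row.logfile_nm

json_rdd = fresh_df.rdd.map(decode_row)

Note that an explicit broadcast of a ~400 MB frame works, but it places a full copy on the driver and on every executor, so it adds a fixed memory cost per node on top of the scan.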
Also, more info from the error (reformatted for readability; nesting follows the "in Current Column Batch, in FileScanNode, in task" chain from the error message):

Total task memory (including non-Photon): 1772.5 MiB
task: allocated 1647.0 MiB, tracked 1772.5 MiB, untracked allocated 0.0 B, peak 1772.5 MiB
  BufferPool: allocated 2.5 MiB, tracked 128.0 MiB, untracked allocated 0.0 B, peak 128.0 MiB
  DataWriter: allocated 0.0 B, tracked 0.0 B, untracked allocated 0.0 B, peak 0.0 B
  FileScanNode(id=2161, output_schema=[string, string, string, bool, timestamp, date]): allocated 1644.5 MiB, tracked 1644.5 MiB, untracked allocated 0.0 B, peak 1644.5 MiB
    Current Column Batch: allocated 1472.9 MiB, tracked 1473.0 MiB, untracked allocated 0.0 B, peak 1473.0 MiB
      BufferPool: allocated 1472.9 MiB, tracked 1473.0 MiB, untracked allocated 0.0 B, peak 1473.0 MiB
    dictionary values: allocated 1024.0 B, tracked 1024.0 B, untracked allocated 0.0 B, peak 1024.0 B
    dictionary values: allocated 4.0 KiB, tracked 4.0 KiB, untracked allocated 0.0 B, peak 4.0 KiB
    dictionary values: allocated 1024.0 B, tracked 1024.0 B, untracked allocated 0.0 B, peak 1024.0 B
    dictionary values: allocated 8.0 KiB, tracked 8.0 KiB, untracked allocated 0.0 B, peak 8.0 KiB
    dictionary values: allocated 1024.0 B, tracked 1024.0 B, untracked allocated 0.0 B, peak 1024.0 B
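Reading these numbers (my interpretation, not stated in the thread): the tracked values add up, 1644.5 MiB (FileScanNode) + 128.0 MiB (task-level BufferPool) + 0 B (DataWriter) = 1772.5 MiB, exactly the reported total task memory. This is Photon's own per-task reservation accounting rather than the JVM executor heap, which would explain why the executor memory graphs look healthy while the scan's Current Column Batch alone holds ~1473 MiB and a further 6.7 MiB reservation tips the task over its limit.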
01-15-2025 07:48 PM
Try using a memory-intensive cluster with more driver and worker memory than the current configuration.
01-15-2025 07:54 PM
Hi Avinash,
Already tried that.
As you can see below, memory utilization remains low (utilization screenshot attached).
01-15-2025 08:04 PM
Are you sure the logic is being executed on the workers and not entirely on the driver? There are cases where the entire logic has to run on the driver, leaving worker memory under-utilised. Likewise for spark.sql statements: the Spark session cannot be shipped to the workers, so logic that depends on it runs entirely in driver memory, which can lead to an OOM on the driver while worker memory stays under-utilised.
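To make that distinction concrete, a small illustration with a hypothetical df and decode function (not code from this thread):

# Driver-side: collect() pulls every row into the driver first, so the
# decoding loop below consumes driver memory only.
rows = df.collect()
decoded = [decode(r) for r in rows]

# Executor-side: the function is serialized out to the workers and applied
# partition by partition; driver memory stays flat.
decoded_rdd = df.rdd.map(decode)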
01-15-2025 08:19 PM
Hi,
I am sure the logic is not running on the driver.
Below is the driver utilization (screenshot attached); that is exactly my question: based on the error and the logs, I am not sure where exactly the memory problem is arising.
01-15-2025 06:45 PM
Note: Photon is not enabled
01-15-2025 08:24 PM