3 weeks ago - last edited 3 weeks ago
I have a table in which one of the columns contains raw XML data; each row is approximately 3 MB and the overall data volume is very large, so I have chunked the processing into 1-hour windows. The memory utilization metrics look fine, but I am receiving the error below:

org.apache.spark.SparkException: Job aborted due to stage failure: org.apache.spark.memory.SparkOutOfMemoryError: Photon ran out of memory while executing this query. Photon failed to reserve 6.7 MiB for BufferPool, in Current Column Batch, in FileScanNode(id=2513, output_schema=[string, string, string, bool, timestamp, date]), in task.
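For context, the hourly chunking is roughly along these lines; the table and column names in this sketch (bronze.xml_raw_table, xml_raw, event_ts) are assumptions for illustration, not the real schema.

# Minimal sketch of the 1-hour chunked read; names are assumptions
from pyspark.sql import functions as F

window_start = "2024-01-01 00:00:00"   # start of the 1-hour window being processed
window_end   = "2024-01-01 01:00:00"   # end of the window

raw_df = (
    spark.table("bronze.xml_raw_table")                 # column xml_raw holds ~3 MB per row
         .where(F.col("event_ts") >= F.lit(window_start))
         .where(F.col("event_ts") <  F.lit(window_end))
)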
Solutions tried:
Allocate more memory - doesn't work; most of the memory is free
Increase overhead memory - doesn't work
Disable autoscaling
Photon is already disabled
Compute Configuration:
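Roughly, the kind of settings experimented with look like the following; every name and value here is a placeholder sketch, not the actual configuration.

# Hypothetical sketch of the settings experimented with; values are placeholders.
# Executor memory and overhead are cluster-level Spark configs (set in the
# cluster's Spark config, not at runtime):
#
#   spark.executor.memory 20g
#   spark.executor.memoryOverhead 4g
#   spark.databricks.photon.enabled false
#
# Quick runtime check that the cluster actually picked the settings up:
conf = spark.sparkContext.getConf()
print(conf.get("spark.executor.memory", "not set"))
print(conf.get("spark.executor.memoryOverhead", "not set"))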
3 weeks ago
Hi @EktaPuri,
Was this failure observed before? Can you share more context on what you are doing?
3 weeks ago - last edited 3 weeks ago
Here, from the xml_raw data, we extract tags and their respective hex string values, decode them, and build a JSON object over them using rdd.map. This used to work earlier, since the data load was not that heavy. Now we are doing a history load (not a full history load, only files that were missed or are new), processed in 1-hour intervals. I join the new records with the already-processed files, since I don't want to reprocess files that were handled earlier; I broadcast that frame, and since it contains only one column it is only about 400 MB. The other major issue is that the data provided in the bronze layer has a high number of duplicates, so we had to do dropDuplicates on logfile_nm, which is one pain point (see the sketch below). What I want to understand is whether the BufferPool memory is part of executor memory; on investigation the executor memory utilization looks fine, so where exactly is the memory leak arising?
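A rough sketch of the flow described above; apart from logfile_nm, the table, column, and function names are assumptions made for illustration, and the real tag-extraction logic is more involved.

# Hypothetical sketch of the described pipeline; names other than logfile_nm are assumptions.
import json
import xml.etree.ElementTree as ET
from pyspark.sql import functions as F

raw_df = spark.table("bronze.xml_raw_table")   # the 1-hour chunk described earlier (assumed name)

def decode_row(row):
    # Parse the raw XML, pull out each tag and its hex-encoded value,
    # decode the hex string, and return one JSON object per input row.
    root = ET.fromstring(row["xml_raw"])
    decoded = {
        child.tag: bytes.fromhex(child.text).decode("utf-8", errors="replace")
        for child in root
        if child.text
    }
    return (row["logfile_nm"], json.dumps(decoded))

# Bronze data has many duplicates, so de-duplicate on the file name first
new_df = raw_df.dropDuplicates(["logfile_nm"])

# Single-column frame of already-processed file names (~400 MB), broadcast for the join;
# a left_anti join keeps only files that have not been processed before
processed_df = spark.table("silver.processed_files").select("logfile_nm")
to_process = new_df.join(F.broadcast(processed_df), on="logfile_nm", how="left_anti")

# Decode on the executors via rdd.map and come back to a DataFrame
decoded_df = to_process.rdd.map(decode_row).toDF(["logfile_nm", "payload_json"])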
Also, more info about the error:

Total task memory (including non-Photon): 1772.5 MiB
  task: allocated 1647.0 MiB, tracked 1772.5 MiB, untracked allocated 0.0 B, peak 1772.5 MiB
    BufferPool: allocated 2.5 MiB, tracked 128.0 MiB, untracked allocated 0.0 B, peak 128.0 MiB
    DataWriter: allocated 0.0 B, tracked 0.0 B, untracked allocated 0.0 B, peak 0.0 B
    FileScanNode(id=2161, output_schema=[string, string, string, bool, timestamp, date]): allocated 1644.5 MiB, tracked 1644.5 MiB, untracked allocated 0.0 B, peak 1644.5 MiB
      Current Column Batch: allocated 1472.9 MiB, tracked 1473.0 MiB, untracked allocated 0.0 B, peak 1473.0 MiB
        BufferPool: allocated 1472.9 MiB, tracked 1473.0 MiB, untracked allocated 0.0 B, peak 1473.0 MiB
      dictionary values: allocated 1024.0 B, tracked 1024.0 B, untracked allocated 0.0 B, peak 1024.0 B
      dictionary values: allocated 4.0 KiB, tracked 4.0 KiB, untracked allocated 0.0 B, peak 4.0 KiB
      dictionary values: allocated 1024.0 B, tracked 1024.0 B, untracked allocated 0.0 B, peak 1024.0 B
      dictionary values: allocated 8.0 KiB, tracked 8.0 KiB, untracked allocated 0.0 B, peak 8.0 KiB
      dictionary values: allocated 1024.0 B, tracked 1024.0 B, untracked allocated 0.0 B, peak 1024.0 B
3 weeks ago
Try using a memory-intensive cluster with more driver and worker memory than the current one.
3 weeks ago
Hi Avinash,
Already tried.
Below you can see that memory utilization is quite low.
3 weeks ago
Are you sure that the logic is being executed on the workers and not entirely on the driver? There are cases where the entire logic has to be executed on the driver, in which case worker memory is under-utilised. The same applies to spark.sql statements, since the Spark session cannot be sent to multiple workers; in that case the whole logic runs in driver memory, which leads to OOM on the driver while worker memory stays under-utilised.
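As a generic illustration of that distinction (not the poster's actual code): looping over collect() runs everything on the driver, while an rdd.map pushes the same work out to the executors.

# Generic illustration of driver-side vs executor-side work; not the actual job code.
df = spark.range(1_000_000)

# Driver-side: collect() pulls every row back to the driver and the loop runs there.
# With large data this is what pushes the driver towards OOM while workers stay idle.
# results = [some_transform(r["id"]) for r in df.collect()]   # some_transform is hypothetical

# Executor-side: the same transformation as an RDD map runs on the workers,
# spreading the memory pressure across the executors instead of the driver.
results = df.rdd.map(lambda r: r["id"] * 2)
print(results.take(5))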
3 weeks ago
Hi,
I am sure that the logic is not running on the driver.
Below is the driver utilization. That's the question: based on the error and logs, I am not sure where exactly the memory leak is happening.
3 weeks ago
Note: Photon is not enabled
3 weeks ago