OOM Issue in Streaming with foreachBatch()
06-03-2024 08:56 AM
Within the foreachBatch() of our streaming job, each microbatch performs the following operations:
- Calling collect() on very small DataFrames (a few megabytes); driver memory is more than 20 GB, so this shouldn't be an issue
- Caching DataFrames and then unpersisting them
- Converting a single row to a DataFrame
- Performing a cross join on a very small DataFrame
- Various filtering operations
- Writing the DataFrame to the target_table in append mode (a simplified sketch of our foreachBatch function is below).
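A simplified sketch of what each microbatch does. Column names (`key`, `value`), table names (`source_table`, `target_table`), the checkpoint path, and the `process_batch` name are placeholders, and the batch is assumed non-empty:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def process_batch(batch_df, batch_id):
    # Cached because the batch is reused several times below
    batch_df.cache()

    # collect() on a very small DataFrame (a few MB) to drive control flow
    keys = [r["key"] for r in batch_df.select("key").distinct().collect()]

    # Convert a single row back into a one-row DataFrame
    head = batch_df.first()  # assumes the batch is non-empty
    single_row_df = spark.createDataFrame([head], batch_df.schema)

    # Cross join against a very small DataFrame
    latest = single_row_df.select("value").withColumnRenamed("value", "latest_value")
    enriched = batch_df.crossJoin(latest)

    # Various filters, then append to the target table
    (enriched.filter("latest_value IS NOT NULL")
             .write.mode("append")
             .saveAsTable("target_table"))

    batch_df.unpersist()

df = spark.readStream.table("source_table")  # placeholder streaming source

query = (df.writeStream
           .foreachBatch(process_batch)
           .option("checkpointLocation", "/tmp/checkpoints/target_table")
           .start())
```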
My questions:
- When does Spark remove state from the driver metadata in a streaming application? Are there configurations to force more aggressive cleanup?
- Could frequent calls to collect() on small DataFrames still cause driver OOM issues? What alternatives can I use?
- Should I avoid caching DataFrames even if they are used multiple times within a microbatch? How can I optimize the caching strategy?
- Are there specific configurations or practices to better manage driver metadata and prevent memory bloat?
06-03-2024 12:33 PM
From the information you provided, your issue might be resolved by setting a watermark on the streaming DataFrame. The purpose of a watermark is to set a maximum time for records to be retained in state. Without a watermark, records in your state accumulate in memory, eventually resulting in an OOM error. Your job can also suffer other performance hits as state accumulates over time.
In your case, assuming it's not necessary to retain all records in state for the lifetime of the job, you should set a reasonable window after which records are removed from state. For example, you could apply a 10-minute watermark like this:
`df.withWatermark("event_time", "10 minutes")`
Please refer to this Databricks documentation article on watermarks, including code examples: https://docs.databricks.com/en/structured-streaming/watermarks.html
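Note that a watermark only bounds state when combined with a stateful operation such as a windowed aggregation. A minimal sketch, assuming your stream has an `event_time` column and a `user_id` grouping key (table name and checkpoint path are also placeholders):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.readStream.table("events")  # placeholder streaming source

windowed_counts = (
    df.withWatermark("event_time", "10 minutes")                # drop state older than 10 minutes
      .groupBy(F.window("event_time", "5 minutes"), "user_id")  # stateful aggregation bounded by the watermark
      .count()
)

(windowed_counts.writeStream
    .outputMode("append")  # a window is emitted only after the watermark passes its end
    .option("checkpointLocation", "/tmp/checkpoints/windowed_counts")
    .toTable("windowed_counts"))
```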
06-03-2024 02:28 PM
@xorbix_rshiva thanks for the reply! The streaming app does not keep state (we only use foreachBatch, with no stateful operations), so a watermark is unfortunately irrelevant and not the solution here.
02-21-2025 08:41 AM
Did you ever figure out what is causing the memory leak? We are experiencing a nearly identical issue where driver memory gradually increases over time, ending in an OOM after a few days.
I did track down this open bug ticket, which describes a memory leak when a dataset is persisted, even if it is later unpersisted:
https://issues.apache.org/jira/browse/SPARK-35262
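In case it helps anyone, here is a sketch of the mitigation we are trying while that ticket is open, assuming the leak is tied to persisted datasets. It uses a blocking unpersist plus an occasional full cache clear (the `target_table` name and the every-100-batches cadence are arbitrary placeholders):

```python
from pyspark.sql import SparkSession

def process_batch(batch_df, batch_id):
    spark = SparkSession.builder.getOrCreate()
    batch_df.persist()
    try:
        # ... the usual transformations ...
        batch_df.write.mode("append").saveAsTable("target_table")
    finally:
        # Block until cached blocks are actually freed instead of relying
        # on asynchronous cleanup
        batch_df.unpersist(blocking=True)
        # Heavy-handed fallback: periodically drop everything left in the
        # cache (only reasonable if nothing long-lived is cached on this cluster)
        if batch_id % 100 == 0:
            spark.catalog.clearCache()
```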

