Hi Community,
I encountered the following error:
Failed to store executor broadcast spark_join_relation_1622863 (size = Some(67141632)) in BlockManager with storageLevel=StorageLevel(memory, deserialized, 1 replicas)
in a Structured Streaming job on Databricks that uses foreachBatch to write to a Delta table.
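For context, the job follows roughly this pattern. This is only a sketch: the table name, join key, checkpoint path, and `source_df` are placeholders, and `spark` is the ambient Databricks session.

```python
from delta.tables import DeltaTable

def upsert_batch(batch_df, batch_id):
    # MERGE each micro-batch into the target Delta table; the broadcast in the
    # error message appears to come from the join that MERGE performs.
    target = DeltaTable.forName(spark, "main.schema.target_table")  # placeholder name
    (target.alias("t")
           .merge(batch_df.alias("s"), "t.id = s.id")  # placeholder join key
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())

(source_df.writeStream                                   # placeholder streaming source
          .foreachBatch(upsert_batch)
          .option("checkpointLocation", "/checkpoints/target_table")  # placeholder path
          .start())
```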
I’ve observed that most of the failures occur when the table size is in the 69–75 MB range, and the error suggests that Spark is unable to store the broadcast table in executor memory.
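For reference, the on-disk size of a Delta table can be checked with DESCRIBE DETAIL (placeholder table name below). Note that the broadcast size in the error, 67141632 bytes, is about 64 MB, and the deserialized in-memory copy is typically larger than the on-disk size:

```python
# Compare the table's on-disk size against the broadcast size in the error
# (size = Some(67141632) is roughly 64 MB on disk/wire).
detail = spark.sql("DESCRIBE DETAIL main.schema.target_table").collect()[0]
print(detail["sizeInBytes"], detail["numFiles"])
```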
When reviewing executor memory usage, I noticed a few GB of free memory available, but also high swap usage. Given that free memory, I would expect the executor to easily hold a 69–80 MB table for broadcasting.
- Why couldn’t the executor hold roughly 80 MB of broadcast data despite having GBs of free memory?
- Even if I disable the broadcast setting (presumably spark.sql.autoBroadcastJoinThreshold = -1), I believe MERGE operations still enforce broadcasting internally. Is that correct?
- Is this error primarily due to the broadcast threshold, or is it related to insufficient memory in the executor?
- Since the error occurs when the executor cannot hold roughly 69–80 MB in memory, should I increase the broadcast threshold (e.g., to 100 MB) or decrease it? (See the snippet after this list for how I understand that setting.)
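For reference, here is how I understand the threshold knob. This is a sketch assuming the standard spark.sql.autoBroadcastJoinThreshold setting; the 100 MB value is just an example, not a recommendation:

```python
# Current threshold; open-source Spark's default is 10 MB (10485760 bytes).
print(spark.conf.get("spark.sql.autoBroadcastJoinThreshold"))

# Option A: raise it above the observed table size (100 MB here is just an example).
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", str(100 * 1024 * 1024))

# Option B: disable automatic broadcast joins entirely.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")

# Note: with AQE enabled there is also spark.sql.adaptive.autoBroadcastJoinThreshold,
# which, if set, takes precedence during adaptive query planning.
```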
Looking forward to hearing your thoughts and suggestions on how to resolve this error!