Driver crash when processing a large dataframe
01-30-2024 08:17 AM
I have a dataframe with about 2 million text rows (roughly 1 GB). I repartition it into about 700 partitions, as that is the number of cores available on my cluster's executors. I run transformations that extract medical information and then write the results to Parquet on S3. The process runs for about 3 hours and then crashes. The driver crashes with the following error:
The spark driver has stopped unexpectedly and is restarting. Your notebook will be automatically reattached.
I have tried the driver with both 128 GB and 256 GB of memory but end up with the same result. I have also used the persist option, with similar crashes.
01-30-2024 10:22 AM
Hi @desertstorm , The error "The spark driver has stopped unexpectedly and is restarting. Your notebook will be automatically reattached." usually happens when the driver is under memory pressure. This means that some piece of code is executing on the driver rather than on the executors. We need to identify and remove that piece of code.
Here are some general things to watch out for:
1. If your code has display() or collect() operations, remove them.
2. If your code has plain Python code running on the driver (for example, loops or pandas operations), replace it with PySpark so the work runs on the executors; see the sketch below.
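A minimal sketch of the second point; the DataFrame contents, column names, and S3 path are placeholders, not code from this thread:
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Toy stand-in for the real dataset; in practice this would be the 2M-row DataFrame.
df = spark.createDataFrame([("some clinical note",), ("another note",)], ["text"])

# Driver-heavy pattern to avoid: collect() pulls every row into driver memory.
# rows = df.collect()
# lengths = [len(r["text"]) for r in rows]

# Distributed alternative: the same work expressed as a column expression
# runs on the executors and never passes through the driver.
result_df = df.withColumn("text_length", F.length("text"))

# The write also goes straight from the executors to storage.
result_df.write.mode("overwrite").parquet("s3://your-bucket/output/")
```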
01-30-2024 03:09 PM
Hi @Lakshay Thanks so much for your reply. I have looked into most of those options and don't see any plain Python code. It's mostly pipeline.transform. Here is the code where it crashes. I don't think either the withColumn calls or the Parquet write should bring data to the driver, so I'm not sure what's wrong. Happy to share the file as well.
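(For context, the general shape being described is roughly the following. This is only a reconstruction with placeholder names; the actual code was shared as an attachment rather than in the thread.)
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Placeholder input path; the real job reads ~2 million text rows.
df = spark.read.parquet("s3://your-bucket/input/")
df = df.repartition(768)  # roughly one partition per executor core

# `pipeline` stands in for the fitted NLP PipelineModel referred to above.
annotated = pipeline.transform(df)

# Column-level post-processing stays on the executors...
annotated = annotated.withColumn("ingest_date", F.current_date())

# ...and so does the write to Parquet on S3.
annotated.write.mode("overwrite").parquet("s3://your-bucket/output/")
```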
01-31-2024 03:18 AM
Hi @desertstorm , I think the issue is with the "Process rxnorm results" part of the code. You can try commenting out that part to confirm whether that is the case.
01-30-2024 03:52 PM
Just wondering where the magic number "768" in your repartition is coming from? How big is your cluster? What about your driver instance?
01-30-2024 04:06 PM
That's the number of cores available on the executors. I have tried the driver with 256 GB as well as 128 GB, with the same results.
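(For reference, one way to avoid hard-coding that number is to ask Spark for the cluster's parallelism; this is just a sketch with a placeholder input path:)
```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder input; substitute the real source DataFrame.
df = spark.read.parquet("s3://your-bucket/input/")

# defaultParallelism reflects the total executor cores Spark sees,
# so the partition count tracks the cluster size instead of a hard-coded 768.
df = df.repartition(spark.sparkContext.defaultParallelism)
```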
01-31-2024 11:28 AM
Can you try splitting the data? A rough sketch of what that could look like is below.
Do you have any collect() calls or other driver-heavy actions that could cause this error on the driver?
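For the first suggestion, one way to split the work into slices might look like this; the `id` column, chunk count, and paths are placeholders:
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("s3://your-bucket/input/")

# Assign each row to one of N buckets and process the buckets one at a time,
# so no single write job has to handle the full dataset at once.
num_chunks = 10
df = df.withColumn("chunk", F.abs(F.hash("id")) % num_chunks)

for i in range(num_chunks):
    (df.filter(F.col("chunk") == i)
       .drop("chunk")
       .write.mode("append")
       .parquet("s3://your-bucket/output/"))
```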
02-02-2025 02:19 PM
I am encountering the same issue. My dataframe is about 7 million rows. I tried reducing the dataframe size; with anything over a million rows, the write operation doesn't finish and I see the driver error.
02-02-2025 03:07 PM
Hey @Svish ,
Your problem is probably caused by using Pandas. Pandas loads all the data into the driver memory, which is likely why you are experiencing issues. If you can modify your code to use Spark instead, you will probably avoid this problem.
However, if switching to PySpark is not an option, I recommend increasing the driver size to handle the larger data load.
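A minimal sketch of that switch, with placeholder paths and a placeholder `text` column (not taken from your code):
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# pandas version (everything lands in driver memory):
# import pandas as pd
# pdf = pd.read_csv("/dbfs/path/to/data.csv")
# pdf["text_length"] = pdf["text"].str.len()
# pdf.to_parquet("/dbfs/path/to/output")

# Spark version: reading, transforming, and writing all run on the executors.
sdf = spark.read.csv("dbfs:/path/to/data.csv", header=True)
sdf = sdf.withColumn("text_length", F.length("text"))
sdf.write.mode("overwrite").parquet("dbfs:/path/to/output")
```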
🙂

