Kernel switches to Unknown when using PySpark
06-06-2022 03:45 AM
I am working in JupyterHub in a notebook, using a PySpark DataFrame to analyze text. More precisely, I am doing sentiment analysis of newspaper articles. The code works up to a certain point, where the kernel stays busy for approximately 10 minutes and then switches to Unknown. The operations that trigger this are, for example, .drop() and groupBy(). The dataset has only about 25k rows. Looking at the logs, I get this message:
Stage 1:> (0 + 0) / 1] 22/06/02 09:30:17 WARN TaskSetManager: Stage 1 contains a task of very large size (234399 KiB). The maximum recommended task size is 1000 KiB.
After some research I found out that it is probably due to the memory being full. However, I am not sure how to increase it.
To build the Spark application I use this code:
from pyspark.sql import SparkSession, SQLContext

# Build a local Spark session with explicit driver and executor memory settings
spark = SparkSession.builder \
    .master("local") \
    .appName("x") \
    .config("spark.driver.memory", "2g") \
    .config("spark.executor.memory", "12g") \
    .getOrCreate()

sc = spark.sparkContext
sqlContext = SQLContext(sc)  # SQLContext is deprecated; the SparkSession itself can be used instead
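My best guess is that increasing the memory would look roughly like this, but the 8g value is just an assumption on my part and I do not know whether it makes sense for local mode:

# Just a guess on my part: raise spark.driver.memory, since in local mode
# the driver process is the one doing the work. The 8g value is an assumption.
spark = SparkSession.builder \
    .master("local") \
    .appName("x") \
    .config("spark.driver.memory", "8g") \
    .getOrCreate()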
Any ideas on how to stop the kernel from switching to "Unknown", or how to free the memory? Note: I am not using RDDs, just Spark DataFrames.
I am sharing my notebook. This project is for my thesis and I am desperate to get the code working. I would be extremely thankful for any help!
06-07-2022 03:14 AM
Do you actually run the code on a distributed environment (meaning a driver and multiple workers)?
If not, there is no point in using PySpark, as all code will be executed locally.
06-07-2022 03:33 AM
No, I do not. How could I do that?
06-07-2022 03:38 AM
Spark is a distributed data processing framework. For it to shine, you need multiple machines (VMs or physical). Otherwise, in local mode on a single node, it is no better than pandas and similar libraries.
So to start using Spark, you should either connect to an existing Spark cluster (if one is available to you) or, and that might be the easiest way, sign up for Databricks Community Edition and start using Databricks.
Mind that Community Edition is limited in functionality, but it is still very useful.
https://docs.databricks.com/getting-started/quick-start.html
If you cannot do either, stop using PySpark and focus on pure Python code.
You can still run into memory issues though, since you are running the code locally.
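Just to illustrate what I mean by pure Python, a pandas version could look roughly like this (the file name and column names are placeholders, not taken from your notebook):

import pandas as pd

# Placeholder file and column names -- adapt these to your data.
df = pd.read_csv("articles.csv")

# The operations that hung in pyspark are cheap in pandas for ~25k rows.
df = df.drop(columns=["unused_column"])
article_counts = df.groupby("newspaper").size()
print(article_counts)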

06-07-2022 05:21 AM
Are you a Databricks customer? You can use a notebook in the web UI and spin up a cluster very easily.
06-07-2022 06:42 AM
Thank you very much, I will try to do that, as it seems that this is the problem! Nevertheless, I managed to save the DataFrame to CSV and from there convert it to pandas (converting directly from the Spark DataFrame to pandas did not work for me). Pandas works great with this dataset, as it is not that big. However, I am aware that it is not suitable for big data. So for big data, next time, I will try to connect to an existing Spark cluster.
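Roughly what I ended up doing, in case it helps someone (the paths and the DataFrame name sdf are just examples, not the exact code from my notebook):

# Write the Spark DataFrame to CSV; coalesce(1) keeps the output in one part file.
sdf.coalesce(1).write.mode("overwrite").csv("articles_csv", header=True)

import glob
import pandas as pd

# Spark writes a folder of part files, so pick up the part file and read it with pandas.
part_file = glob.glob("articles_csv/part-*.csv")[0]
pdf = pd.read_csv(part_file)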
06-07-2022 06:43 AM
Yes, I am just a customer, I think. I will try to do that, thank you!
06-13-2022 07:51 AM
Hi, unfortunately I do not have a solution. The solution would be to connect to an existing Spark cluster. It seems that I had Spark running only locally, so all the computations were done locally, and that is why the kernel was failing.

