Data Engineering
Kernel switches to unknown using pyspark

SusuTheSeeker
New Contributor III

I am working in a notebook in JupyterHub, using a PySpark dataframe to analyze text. More precisely, I am doing sentiment analysis of newspaper articles. The code works up to a certain point: the kernel becomes busy and, after approximately 10 minutes of being busy, its status switches to unknown. The operations that cause it to stop working are, for example, .drop() and groupBy(). The dataset has only about 25k rows. After looking at the logs I get this message:

Stage 1:> (0 + 0) / 1] 22/06/02 09:30:17 WARN TaskSetManager: Stage 1 contains a task of very large size (234399 KiB). The maximum recommended task size is 1000 KiB.

After some research I found out that it is probably caused by running out of memory, but I am not sure how to increase it.

To build the Spark application I use this code:

from pyspark.sql import SparkSession, SQLContext

spark = SparkSession.builder \
        .master("local") \
        .appName("x") \
        .config("spark.driver.memory", "2g") \
        .config("spark.executor.memory", "12g") \
        .getOrCreate()
sc = spark.sparkContext
sqlContext = SQLContext(sc)  # SQLContext is deprecated in recent Spark versions; spark itself can be used instead
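For reference: with master("local") there are no separate executor processes, so the 12g given to spark.executor.memory is effectively unused and the driver JVM does all the work with only 2g. A hedged sketch of a configuration that gives the driver more room instead; the sizes are placeholders, not tuned values, and spark.driver.memory generally only takes effect if no Spark JVM is already running in the kernel:

from pyspark.sql import SparkSession

# Sketch only: in local mode the driver holds the data, so give it the memory.
spark = SparkSession.builder \
        .master("local[*]") \
        .appName("x") \
        .config("spark.driver.memory", "12g") \
        .config("spark.driver.maxResultSize", "4g") \
        .getOrCreate()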

Any ideas on how to stop the kernel from switching to "Unknown", or how to free the memory? Note: I am not using RDDs, just Spark dataframes.

I am sharing my notebook. This project is for my thesis and I am desperate to get the code working. I would be extremely thankful for any help!

8 REPLIES

-werners-
Esteemed Contributor III

Do you actually run the code in a distributed environment (meaning a driver and multiple workers)?

If not, there is little point in using PySpark, as all the code will be executed locally.
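A quick way to check is to inspect the running SparkContext; a small sketch, assuming the session from the question is called spark:

# Prints e.g. "local" for a single-machine session, or a cluster URL otherwise.
print(spark.sparkContext.master)
# Number of task slots available; 1 for master("local").
print(spark.sparkContext.defaultParallelism)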

No, I do not. How could I do that?

-werners-
Esteemed Contributor III

Spark is a distributed data processing framework. For it to shine, you need multiple machines (VMs or physical). Otherwise it is no better than pandas etc (in local mode on a single node).

So to start using Spark, you should either connect to an existing Spark cluster (if one is available to you) or, and this might be the easiest way, sign up for Databricks Community Edition and start using Databricks.

Mind that Community Edition is limited in functionality, but still very useful.

https://docs.databricks.com/getting-started/quick-start.html
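If an existing cluster is available, connecting is mostly a matter of pointing the builder at its master URL; a hedged sketch with a hypothetical standalone-cluster address:

from pyspark.sql import SparkSession

# "spark-master.example.org:7077" is a placeholder for the real master URL.
spark = SparkSession.builder \
        .master("spark://spark-master.example.org:7077") \
        .appName("x") \
        .config("spark.executor.memory", "12g") \
        .getOrCreate()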

If you cannot do either, stop using PySpark and focus on pure Python code.

You can still run into memory issues, though, since the code runs locally.
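For a ~25k-row dataset, a plain pandas pipeline would look roughly like the sketch below; the file and column names are placeholders, not taken from the notebook:

import pandas as pd

# Placeholder file and column names; ~25k rows fits comfortably in memory.
df = pd.read_csv("articles.csv")
df = df.drop(columns=["unused_column"])           # same kind of drop that stalled in Spark
counts = df.groupby("newspaper")["text"].count()  # same kind of groupBy that stalled in Spark
print(counts)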

Anonymous
Not applicable

Are you a Databricks customer? You can use a notebook in the web UI and spin up a cluster very easily.

Thank you very much, I will try that, as it seems that this is the problem! In the meantime, I managed to save the dataframe to CSV and load it into pandas from there (converting directly from the Spark dataframe to pandas did not work for me). Pandas works great with this dataset, as it is not very big. However, I am aware that it is not suitable for big data, so for big data, next time, I will try to connect to an existing Spark cluster.
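For reference, a sketch of that workaround; the dataframe name and paths are placeholders, and enabling Arrow (spark.sql.execution.arrow.pyspark.enabled) before calling toPandas() is another option that sometimes helps, though it is untested here:

import glob
import pandas as pd

# Write the Spark dataframe out as a single CSV file (placeholder name and path).
sdf.coalesce(1).write.mode("overwrite").option("header", True).csv("/tmp/articles_csv")

# Spark writes a directory of part files; pick up the single part and load it with pandas.
part_file = glob.glob("/tmp/articles_csv/part-*.csv")[0]
pdf = pd.read_csv(part_file)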

Yes, I am just a customer, I think. I will try to do that, thank you!

Kaniz
Community Manager

Hi @Suad Hidbani, we haven't heard from you since our last responses, and I was checking back to see whether you have a resolution yet. If you have found a solution, please share it with the community, as it can be helpful to others. Otherwise, we will respond with more details and try to help.

SusuTheSeeker
New Contributor III

Hi, unfortunately I do not have a solution yet. The fix would be to connect to an existing Spark cluster. It seems that I was running Spark only locally, so all the computations were done locally, and that is why the kernel was failing.
