ConnectException error
09-16-2019 01:25 PM
I'm using PySpark on Databricks and trying to pivot a 27,753,444 × 3 matrix.
If I do it with a Spark DataFrame:
df = df.groupBy("A").pivot("B").avg("C")
it takes forever (I canceled it after 2 hours).
If I convert it to a pandas DataFrame and then pivot:
pandas_df = pandas_df.pivot(index='A', columns='B', values='C').fillna(0)
it always gives me an error:
ConnectException: Connection refused (Connection refused)
Error while obtaining a new communication channel
ConnectException error: This is often caused by an OOM error that causes the connection to the Python REPL to be closed. Check your query's memory usage.
However, I've already increased the memory of my cluster to 192 GB and it still doesn't work.
Can someone help?
Thanks!
Labels:
- Connection error
- Out-of-memory
- Pandas
- Pivot
1 REPLY
09-17-2019 03:59 AM
Hi @Raymond_Hu,
This means that the driver crashed with an OOM (out-of-memory) exception, and after that the notebook is not able to establish a new connection to it. Please try the options below:
- Increase driver-side memory and retry. Note that converting to pandas collects the entire DataFrame onto the driver, so the driver needs enough memory to hold all 27M rows at once.
- Look at the Spark job DAG in the Spark UI, which gives you more information on the data flow.
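For reference, if the pivot does end up driver-side in pandas, `DataFrame.pivot` raises an error whenever an (index, columns) pair occurs more than once, whereas `pivot_table` aggregates duplicates (mean by default), which matches the Spark `avg("C")` in the question. A minimal sketch on toy data (column names A/B/C are from the question; the values are made up):

```python
import pandas as pd

# Tiny stand-in for the 27M-row dataset in the question.
pdf = pd.DataFrame({
    "A": [1, 1, 2, 2],
    "B": ["x", "y", "x", "x"],
    "C": [2.0, 3.0, 4.0, 6.0],
})

# pivot_table averages duplicate (A, B) pairs, mirroring
# groupBy("A").pivot("B").avg("C") in Spark; fillna(0) matches the question.
out = pdf.pivot_table(index="A", columns="B", values="C", aggfunc="mean").fillna(0)
print(out)
```

On the full dataset this only helps if the driver can actually hold the data; otherwise keeping the pivot in Spark (with more driver memory, per the first bullet) is the safer route.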

