I'm using PySpark on Databricks and trying to pivot a 27,753,444 x 3 DataFrame (columns A, B, C).
If I do it on the Spark DataFrame:
df = df.groupBy("A").pivot("B").avg("C")
it takes forever (I canceled it after 2 hours).
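For reference, here is a minimal, self-contained version of the Spark approach (the SparkSession setup and the tiny sample data are just for illustration; my real df is the 27M-row table described above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Tiny illustrative sample with the same three-column shape (A, B, C)
df = spark.createDataFrame(
    [("a1", "b1", 1.0), ("a1", "b2", 2.0), ("a2", "b1", 3.0)],
    ["A", "B", "C"],
)

# Group on A, turn the distinct values of B into columns, average C
wide = df.groupBy("A").pivot("B").avg("C")
wide.show()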
If I instead convert it to a pandas DataFrame and then pivot:
pandas_df = pandas_df.pivot(index='A', columns='B', values='C').fillna(0)
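(The conversion step isn't shown above; it looks roughly like this, i.e. a plain toPandas() call that collects the whole table onto the driver. Exact code may differ slightly.)

# Roughly how the conversion is done: collect the entire Spark DataFrame
# onto the driver as a single in-memory pandas DataFrame before pivoting
pandas_df = df.toPandas()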
This always fails with the same error:
ConnectException: Connection refused (Connection refused)
Error while obtaining a new communication channel
ConnectException error: This is often caused by an OOM error that causes the connection to the Python REPL to be closed. Check your query's memory usage.
However, I've already increased my cluster's memory to 192 GB and it still doesn't work.
Can someone help?
Thanks!