Data Engineering

Pandas UDF of type grouped map fails

user_b22ce5eeAl
New Contributor II

Hello,

I am trying to compute the SHAP values for my whole dataset using a pandas UDF, run once per category of a categorical variable. It runs well on a few categories, but when I run the function on the whole dataset my job fails. I see spills to both memory and disk, and my shuffle read is around 40 GB. I am not sure how to optimize my Spark job here; I increased the cores to 160 and the memory for both the driver and the workers, but still no success.
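For readers hitting the same issue, the grouped-map pattern looks roughly like the sketch below. The function receives each category's rows as a plain pandas DataFrame, so it can be developed and tested locally before handing it to Spark's `applyInPandas`. The column names and the stand-in computation are illustrative assumptions; a real version would fit a model and call `shap.Explainer` inside the function.

```python
import pandas as pd

# Hypothetical per-group function: a grouped-map pandas UDF receives one
# category's rows as a plain pandas DataFrame. A real implementation would
# score a model and compute SHAP values here; as a stand-in we use each
# feature's deviation from the group mean.
def shap_per_group(pdf: pd.DataFrame) -> pd.DataFrame:
    out = pdf.copy()
    out["shap_x"] = pdf["x"] - pdf["x"].mean()  # placeholder for real SHAP values
    return out

# Locally, pandas' own groupby.apply exercises the same per-group logic:
df = pd.DataFrame({"category": ["a", "a", "b"], "x": [1.0, 3.0, 5.0]})
local = df.groupby("category", group_keys=False).apply(shap_per_group)

# On Spark 3.x, the same function plugs into applyInPandas (sdf is a
# hypothetical Spark DataFrame with matching columns):
# result = sdf.groupBy("category").applyInPandas(
#     shap_per_group, schema="category string, x double, shap_x double")
```

Because each group is collected onto a single executor as one pandas DataFrame, very large groups are a common cause of the memory spills described above.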

Any suggestion will be highly appreciated.

Thanks

2 REPLIES

user_b22ce5eeAl
New Contributor II

Was able to get it done by increasing driver memory!
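For reference, driver memory has to be set at launch time rather than at runtime. A minimal sketch with spark-submit is below; the values and the `job.py` filename are illustrative, and on Databricks the equivalent settings go in the cluster's Spark configuration instead.

```shell
# Illustrative spark-submit invocation: raise driver memory and tune
# shuffle parallelism (values are examples, not recommendations).
spark-submit \
  --driver-memory 32g \
  --conf spark.sql.shuffle.partitions=400 \
  job.py
```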

Jackson
New Contributor II

I want to use data.groupby().apply() to apply a function to each group of rows of my PySpark DataFrame.

I used the Grouped Map Pandas UDF. However, I can't figure out how to pass another argument to my function.

I tried using the argument as a global variable, but the function doesn't recognize it (my argument is a PySpark DataFrame).
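The grouped-map API only ever passes the group's rows to the function, so extra arguments must be bound in advance, for example with a closure or `functools.partial`. Note that a PySpark DataFrame cannot be shipped into the function; convert the extra data to pandas first (or join it in Spark beforehand). A minimal sketch with an assumed numeric argument:

```python
from functools import partial

import pandas as pd

# The per-group function takes the group plus an extra argument.
# (scale_group and the column names are illustrative.)
def scale_group(pdf: pd.DataFrame, factor: float) -> pd.DataFrame:
    out = pdf.copy()
    out["scaled"] = out["x"] * factor
    return out

# Bind the extra argument up front; the result is a one-argument callable.
fn = partial(scale_group, factor=10.0)

# The bound function works locally with pandas...
df = pd.DataFrame({"g": ["a", "b"], "x": [1.0, 2.0]})
result = df.groupby("g", group_keys=False).apply(fn)

# ...and the same callable can be handed to Spark (sdf is hypothetical):
# sdf.groupBy("g").applyInPandas(fn, schema="g string, x double, scaled double")
```

If the extra data is itself a DataFrame, a common workaround is `sdf.join(lookup_sdf, on="g")` before the groupBy, so each group already carries the columns it needs.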
