01-18-2022 03:07 AM
When a Spark dataframe with text data has a struct-type schema, Spark is taking far too long to write/save the data to ADLS or a SQL DB, or to download it as CSV.
01-18-2022 12:51 PM
Can you share your code and provide more details, such as the size of the dataset and the cluster configuration? I also don't understand what "text data" means here, since it looks like a more complex data type.
01-18-2022 09:54 PM
I'm using YakeKeywordExtraction from Spark NLP to extract keywords, and I'm facing an issue saving the result (a Spark dataframe) to ADLS Gen1 Delta tables from Azure Databricks. The dataframe consists of strings in a struct schema, and I'm flattening the struct by exploding it and extracting the required fields. The problem appears when I try to save this dataframe to any of the targets: ADLS, a database, toPandas, or CSV. The dataframe has at most 20 rows and 7 columns, and the computation in the notebook takes about 10 minutes, but once the final dataframe is ready, saving the extracted data takes close to 55 hours. I have tried to curb this time by applying the optimization techniques listed in various forums and communities, such as enabling spark.sql.execution.arrow.pyspark and working with RDDs, but nothing has helped.
Code to Explode results:
scores = result \
    .selectExpr("explode(arrays_zip(keywords.result, keywords.metadata)) as resultTuples") \
    .selectExpr("resultTuples['0'] as keyword", "resultTuples['1'].score as score")
Code to write to ADLS:
scores.write.format("delta").save("path/to/adls/folder/result")
01-19-2022 04:21 PM
It's still hard to figure out exactly what's wrong, but my guess is that the explode is creating a huge dataframe that can't fit into memory. It largely depends on how many rows you have and the size of the struct: if you have 100 rows and the struct contains 100 elements, the explode produces 100 x 100 = 10,000 rows.
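A quick way to confirm whether the explode is the culprit is to compare row counts before and after it. The snippet below is a minimal sketch, assuming the result dataframe with a keywords column from the post above:
# Sketch: measure how much the explode multiplies the row count.
# Assumes `result` is the Spark NLP output dataframe with a `keywords` column.
rows_before = result.count()
# arrays_zip + explode yields one output row per element of the keywords array
exploded = result.selectExpr(
    "explode(arrays_zip(keywords.result, keywords.metadata)) as resultTuples"
)
rows_after = exploded.count()
print("rows before explode:", rows_before, "- rows after explode:", rows_after)
If the second number is orders of magnitude larger than the first, the slow write is really the cost of materializing that exploded data.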
03-14-2022 08:27 AM
@shiva Santosh
Have you checked the count of the dataframe that you are trying to save to ADLS?
As @Joseph Kambourakis mentioned, the explode can result in a one-to-many expansion of rows, so it's better to check the dataframe count first and see whether Spark OOMs in the workspace.
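One way to do that check, as a hedged sketch reusing the scores dataframe and the illustrative ADLS path from the earlier post: cache and count the dataframe before the write, so any memory problem surfaces at the count step rather than inside the Delta write.
# Sketch: materialize and size-check the dataframe before writing to Delta.
# `scores` and the ADLS path come from the earlier post and are illustrative.
scores.cache()
row_count = scores.count()  # forces full evaluation; an OOM would show up here
partitions = scores.rdd.getNumPartitions()
print("scores:", row_count, "rows across", partitions, "partitions")
# Only attempt the Delta write once the count looks sane
scores.write.format("delta").save("path/to/adls/folder/result")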