01-18-2022 03:07 AM
Spark data frame with text data: when the schema is a Struct type, Spark is taking too much time to write / save / push the data to ADLS or a SQL DB, or to download it as CSV.
01-18-2022 12:51 PM
Can you share your code and provide more details, like the size of the dataset and the cluster configuration? I also don't understand "text data", as it seems to be a more complex data type.
01-18-2022 09:54 PM
I’m using YakeKeywordExtraction from Spark NLP to extract keywords, and I’m facing an issue saving the result (a Spark data frame) to ADLS Gen1 delta tables from Azure Databricks. The data frame consists of strings in a Struct schema, and I convert the struct schema to a flat format by exploding it and extracting the required data. The problem shows up when I try to save this data frame to any of the targets: ADLS, a DB, toPandas, or CSV. The final data frame has at most 20 rows and 7 columns. The computation in this notebook takes 10 minutes, but once the final data frame is ready, saving the extracted data takes close to 55 hours. I have tried to curb this time by implementing the optimization techniques listed in various forums/communities, such as execution.arrow.pyspark, RDDs, etc., but nothing worked.
Code to Explode results:
# zip each keyword string with its metadata struct and explode into one row per keyword
scores = result \
    .selectExpr("explode(arrays_zip(keywords.result, keywords.metadata)) as resultTuples") \
    .selectExpr("resultTuples['0'] as keyword", "resultTuples['1'].score as score")
Code to write to ADLS:
scores.write.format("delta").save("path/to/adls/folder/result")
01-19-2022 04:21 PM
It's still hard to figure out exactly what's wrong, but my guess is the explode is creating a huge dataframe that's not able to fit into memory. It largely depends on how many rows you have and the size of the struct. if you have 100 rows and the struct is length/size 100 then you get 100x100 rows.
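A rough illustration of that multiplication (hypothetical numbers, assuming an active SparkSession named spark):

from pyspark.sql import functions as F

# hypothetical: 100 input rows, each carrying an array of 100 structs
df = spark.range(100).withColumn(
    "arr", F.array(*[F.struct(F.lit(i).alias("v")) for i in range(100)])
)
exploded = df.withColumn("item", F.explode("arr"))
print(exploded.count())  # 100 x 100 = 10,000 rows after the explode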
03-14-2022 08:27 AM
@shiva Santosh
Have you checked the count of the data frame that you are trying to save to ADLS?
As @Joseph Kambourakis mentioned, the explode can result in a 1-to-many blow-up of rows, so it's better to check the data frame count and see if Spark OOMs in the workspace.
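For example (reusing the result and scores names from the code earlier in the thread), a quick check like this would show whether the explode is producing far more rows than the ~20 expected:

# compare row counts before and after the explode
print("documents:", result.count())
print("exploded keyword rows:", scores.count())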