Data Engineering

Writing a Spark DataFrame to ADLS takes a huge amount of time when the DataFrame contains text data.

Santosh09
New Contributor II

When a Spark DataFrame contains text data in a struct-type schema, Spark takes far too long to write/save/push the data to ADLS or a SQL DB, or to download it as CSV.

4 REPLIES

Hubert-Dudek
Esteemed Contributor III

Can you share your code and provide more details, like the size of the dataset and the cluster configuration? I also don't understand "text data", as it seems to be a more complex data type.

Santosh09
New Contributor II

I'm using YakeKeywordExtraction from Spark NLP to extract keywords, and I'm facing an issue saving the result (a Spark DataFrame) to ADLS Gen1 Delta tables from Azure Databricks. The DataFrame consists of strings in a struct schema, and I convert the struct schema to a flat format by exploding it and extracting the required data. The problem shows up when I try to save this DataFrame to any of the target data sources: ADLS, a DB, toPandas, or CSV. The DataFrame has at most 20 rows and 7 columns. The computation in this notebook takes 10 minutes, but once the final DataFrame is ready, saving the extracted data takes close to 55 hours. I have tried to curb this time with every optimization technique listed in various forums/communities, such as spark.sql.execution.arrow.pyspark.enabled and RDDs; nothing worked.

Code to Explode results:

# Zip each keyword string with its metadata, then explode to one row per keyword
scores = result \
    .selectExpr("explode(arrays_zip(keywords.result, keywords.metadata)) as resultTuples") \
    .selectExpr("resultTuples['0'] as keyword", "resultTuples['1'].score as score")

Code to write to ADLS:

# Write the flattened results as a Delta table (placeholder path)
scores.write.format("delta").save("path/to/adls/folder/result")
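
Since Spark is lazy, the write is the first action in this pipeline, so it pays for the whole upstream computation (YAKE extraction plus the explode), not just the save itself. A minimal diagnostic sketch, reusing the names and placeholder path from the snippets above (the cache/count step is an addition for illustration, not part of the original code): forcing a count first separates the cost of computing the DataFrame from the cost of writing it.

scores.cache()      # keep the materialized rows around for the write
n = scores.count()  # action: runs the full plan (YAKE + explode); time this step
print(n)
# mode("overwrite") added here only so the sketch can be re-run safely
scores.write.format("delta").mode("overwrite").save("path/to/adls/folder/result")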

Anonymous (Accepted Solution)
Not applicable

It's still hard to figure out exactly what's wrong, but my guess is that the explode is creating a huge DataFrame that can't fit into memory. It largely depends on how many rows you have and on the size of the struct: if you have 100 rows and the struct has length/size 100, you get 100 x 100 = 10,000 rows.
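
To make that multiplication concrete, here is a toy sketch with hypothetical data (not the poster's): 100 rows that each carry a 100-element array explode into 10,000 rows.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# 100 rows, each with a 100-element array column
df = spark.range(100).withColumn("arr", F.array(*[F.lit(i) for i in range(100)]))

# explode() emits one output row per array element per input row
exploded = df.select("id", F.explode("arr").alias("elem"))
print(exploded.count())  # 100 x 100 = 10000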

User16764241763
Honored Contributor

@shiva Santosh

Have you checked the count of the DataFrame that you are trying to save to ADLS?

As @Joseph Kambourakis mentioned, the explode can turn one row into many, so it's better to check the DataFrame count and see whether Spark OOMs in the workspace.
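
A small sketch of that check, assuming the result and scores names from the snippets above (the column path keywords.result is taken from the Spark NLP output shown there):

from pyspark.sql import functions as F

# How large are the keyword arrays per row before the explode?
result.select(F.size("keywords.result").alias("n_keywords")) \
      .summary("min", "mean", "max").show()

# How many rows does the exploded DataFrame actually contain?
print(scores.count())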
