Dataframe takes an unusually long time (around 2 hours) to save as a Delta table for a very small dataset with 30k rows. Is there a solution for this problem?

suresh1122
New Contributor III

I am trying to save a dataframe to a Delta table after a series of data manipulations using UDFs. I tried using this code:

(
    df
    .write
    .format('delta')
    .mode('overwrite')
    .option('overwriteSchema', 'true')
    .saveAsTable('output_table')
)

but this takes more than 2 hours. So I converted the dataframe into a SQL local temp view and tried saving the df as a Delta table from that temp view. This worked for one of the notebooks (14 minutes), but for the other notebooks it still takes around 2 hours to write to the Delta table. I am not sure why this is happening for such a small dataset. Any solution is appreciated.

code:

df.createOrReplaceTempView("sql_temp_view")

%sql
DROP TABLE IF EXISTS default.output_version_2;
CREATE TABLE default.output_version_2 AS
SELECT * FROM sql_temp_view;

11 REPLIES

UmaMahesh1
Honored Contributor III

What cluster config are you using? Also, what sort of transformations are being done before your final dataframe gets created?

[screenshot of cluster config attached] This is the cluster config. The transformations are data cleanup using filters and search operations using dictionaries.

UmaMahesh1
Honored Contributor III

Can you also give the number of partitions the df has? You can use df.rdd.getNumPartitions().

96 partitions

UmaMahesh1
Honored Contributor III

Since the data volume is so low, try repartitioning the data before you write, using repartition or coalesce, as in the sketch below.
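A minimal sketch of that suggestion, reusing the df and output_table names from the original post (assumptions, not verified against the actual job):

# Collapse the 96 small partitions into one before writing; for ~30k rows
# a single output file avoids per-partition write and commit overhead.
(
    df
    .coalesce(1)
    .write
    .format('delta')
    .mode('overwrite')
    .option('overwriteSchema', 'true')
    .saveAsTable('output_table')
)

Note that coalesce(1) merges existing partitions without a full shuffle; if the upstream stages are skewed, repartition(1) forces a shuffle and can balance the work more evenly at the cost of that shuffle.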

I have a similar issue: the number of partitions is 1 at the table level, and the only transformations applied are casts like date and decimal(20, 2) using withColumn, on 5 worker nodes.

180,890 records take about 10 minutes. How can I improve the performance, and what are the possible ways to find where the time is being spent?

Ajay-Pandey
Esteemed Contributor III

Hi @Suresh Kakarlapudi, what is your file size?

35 MB

Jfoxyyc
Valued Contributor

Is your Databricks workspace set up with VNet injection by any chance?

Fadhi
New Contributor II

@Jfoxyyc I am having a similar problem and came across this post. Does VNet injection cause this? My workspace is set up like that.

Lakshay
Esteemed Contributor

You should also look into the SQL plan to check whether the write phase is indeed the part that is taking the time. Since Spark uses lazy evaluation, some earlier phase (for example, the UDF transformations) may be the slow part, and it only executes when the write triggers it. One way to separate the two is sketched below.
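A minimal sketch of that idea, reusing the df and output_table names from the original post (assumptions, not verified against the actual job): cache and count the dataframe first so the transformations run on their own, then time the write separately.

import time

df = df.cache()

start = time.time()
n = df.count()  # forces the transformations (including any UDFs) to run now
print(f"transform phase: {time.time() - start:.1f}s for {n} rows")

start = time.time()
df.write.format('delta').mode('overwrite').saveAsTable('output_table')
print(f"write phase: {time.time() - start:.1f}s")

If the count alone takes close to 2 hours, the write is not the bottleneck; look at the UDFs and upstream transformations instead. The SQL tab in the Spark UI shows the same breakdown per stage.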
