Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Dataframe takes an unusually long time to save as a Delta table using SQL for a very small dataset with 30k rows. It takes around 2 hours. Is there a solution to this problem?

suresh1122
New Contributor III

I am trying to save a dataframe to a Delta table after a series of data manipulations using UDFs. I tried this code:

(
    df
    .write
    .format('delta')
    .mode('overwrite')
    .option('overwriteSchema', 'true')
    .saveAsTable('output_table')
)

but this is taking more than 2 hours. So I converted the dataframe into a SQL local temp view and tried saving it as a Delta table from that temp view. This worked for one of the notebooks (14 minutes), but other notebooks are also taking around 2 hours to write to the Delta table. I am not sure why this is happening for such a small dataset. Any solution is appreciated.

code:

df.createOrReplaceTempView("sql_temp_view")

%sql
DROP TABLE IF EXISTS default.output_version_2;
CREATE TABLE default.output_version_2 AS
SELECT * FROM sql_temp_view

12 Replies

UmaMahesh1
Honored Contributor III

What is the cluster config you are using? Also, what sort of transformations are being done before your final dataframe is created?

Uma Mahesh D

[Screenshot (232) attached] This is the cluster config. The transformations are data cleanup using filters and search operations using dictionaries.

UmaMahesh1
Honored Contributor III

Can you also give the number of partitions the df has?

You can use df.rdd.getNumPartitions()

Uma Mahesh D

96 partitions

UmaMahesh1
Honored Contributor III

Since the data volume is so low, try reducing the number of partitions before you write, using repartition or coalesce (see the sketch below).

Uma Mahesh D
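A minimal sketch of this suggestion, applied to the write from the original post (coalesce(1) is an assumption; pick a partition count that matches your data volume):

(
    df
    .coalesce(1)  # 30k rows fit comfortably in one partition; 96 partitions means 96 tiny files and 96 tasks
    .write
    .format('delta')
    .mode('overwrite')
    .option('overwriteSchema', 'true')
    .saveAsTable('output_table')
)

coalesce merges existing partitions without a full shuffle; repartition(n) forces a shuffle but produces evenly sized partitions.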

I have a similar issue. The number of partitions is 1 at the table level, and the only transformations are casts such as date and decimal(20, 2) applied with withColumn, on 5 worker nodes.

180,890 records take about 10 minutes. How can I improve the performance, and what are the possible ways to find where the time is being spent?
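For illustration, a minimal sketch of the kind of casts described above (column names are hypothetical):

from pyspark.sql import functions as F

df = (
    df
    .withColumn('event_date', F.col('event_date').cast('date'))   # string -> date
    .withColumn('amount', F.col('amount').cast('decimal(20,2)'))  # string -> decimal(20,2)
)

Casts like these are cheap, but with a single partition only one task does all the work, so spreading the data across more partitions before writing may help in this case.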

jaga2
New Contributor II

I am having the same issue: a read/write takes a long time, around 10 hours, for a data size of 21 GB.

Ajay-Pandey
Esteemed Contributor III

Hi @Suresh Kakarlapudi, what is your file size?

Ajay Kumar Pandey

35 MB

Jfoxyyc
Valued Contributor

Is your Databricks workspace set up with VNet injection, by any chance?

Fadhi
New Contributor II

@Jfoxyyc I am having a similar problem and came across this post. Does VNet injection cause this? My workspace is set up that way.

Lakshay
Databricks Employee

You should also look into the SQL plan to check whether the write phase is indeed the part taking the time. Since Spark uses lazy evaluation, some other phase may be the actual bottleneck.
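A minimal sketch of how to check this, using the names from the original post (the timing approach is an assumption, not Databricks-specific tooling):

import time

# Inspect the physical plan the write will execute.
df.explain(True)

# Force the upstream transformations (including any Python UDFs) to run
# without writing anything, to see where the 2 hours actually go.
start = time.time()
df.cache()
df.count()
print(f'transformations: {time.time() - start:.1f}s')

# With the result cached, the write now pays only for the write itself.
start = time.time()
df.write.format('delta').mode('overwrite').option('overwriteSchema', 'true').saveAsTable('output_table')
print(f'write: {time.time() - start:.1f}s')

If the count alone already takes hours, the bottleneck is the UDF-heavy transformations rather than the Delta write.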
