data frame takes unusually long time to write for small data sets

Anonymous
Not applicable

We have configured a workspace with our own VPC. We need to extract data from DB2 and write it in Delta format. For 550k records with 230 columns, the job took 50 minutes to complete; 15 million records take more than 18 hours. Not sure why writing takes such a long time. Would appreciate a solution for this.

Code:

df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)

df.write.mode("append").format("delta").partitionBy("YEAR", "MONTH", "DAY").save(delta_path)

1 ACCEPTED SOLUTION


Hubert-Dudek
Esteemed Contributor III

Please increase parallelism by adjusting the JDBC read settings:

columnName="key",

lowerBound=1L,

upperBound=100000L,

numPartitions=100,

These are example values. Ideally, the key column is unique and continuous so the data is divided equally without skew.

Please also analyze the Spark UI to see what takes the most time (reading or writing?).
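
For illustration, a minimal PySpark sketch of how these options plug into the read from the original post (jdbcUrl, pushdown_query, connectionProperties and delta_path come from the question; the "key" column, bounds and partition count are placeholder assumptions, and in the Python API the partition column is passed as column= rather than columnName=):

# Parallel JDBC read: Spark issues numPartitions concurrent queries, each
# covering a slice of [lowerBound, upperBound] on the partition column.
df = spark.read.jdbc(
    url=jdbcUrl,
    table=pushdown_query,
    column="key",            # placeholder: an integer, evenly distributed key
    lowerBound=1,
    upperBound=100000,
    numPartitions=100,
    properties=connectionProperties,
)

# The Delta write stays the same as in the original post.
df.write.mode("append").format("delta").partitionBy("YEAR", "MONTH", "DAY").save(delta_path)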


9 REPLIES


Hi @Hubert Dudek, I think the unique column should be an integer, not alphanumeric/string, right?

Anonymous
Not applicable

Hi @Hubert Dudek, if we don't have a unique integer/continuous column, how can this be done?

Hubert-Dudek
Esteemed Contributor III

Just try with numPartitions=100.

Anonymous
Not applicable

Thanks Hubert for your input. I checked the Spark UI; writing takes the longer time.

Any link to read about increasing parallelism?

RKNutalapati
Valued Contributor

@Dhusanth Thangavadivel, in general, this is how we configure the cluster: if we plan to import data with 100 partitions, then we need to make sure the cluster can run 100 threads.

It will also depend on the database, whether it allows 100 connections at a time.

What I have observed is that if any columns contain huge text or BLOB data, then the read/write will be a bit slow.
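
As a rough illustration of that sizing, a minimal sketch (the connection limit and the multiplier below are placeholder assumptions, not values from this thread):

# Keep the number of JDBC read partitions within what the cluster can run
# concurrently and what the database will accept as parallel connections.
db_max_connections = 100                               # assumed DB-side limit
cluster_cores = spark.sparkContext.defaultParallelism  # roughly the total executor cores
num_partitions = min(db_max_connections, cluster_cores * 4)  # a few task waves per core
print(num_partitions)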

Hubert-Dudek
Esteemed Contributor III

Every CPU core processes one partition at a time; the others wait. You can use autoscaling, for example 2-8 executors, each with 4 CPUs, so at most 32 (4x8) partitions are processed concurrently.

Please also check the network configuration. A Private Link connection to ADLS is recommended.

After df = spark.read.jdbc, please verify the partition count using df.rdd.getNumPartitions()
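
A quick sketch of that check, continuing from the read in the original post (the repartition step is an optional, assumed workaround for the case where the read came back as a single partition):

df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)

# Without partitionColumn/lowerBound/upperBound the JDBC read is typically a single partition.
print(df.rdd.getNumPartitions())

# Optional: spread the rows across the cluster before writing if everything landed in one partition.
df = df.repartition(64)  # placeholder partition count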

Kaniz
Community Manager

Hi @Dhusanth Thangavadivel, could you let us know whether your problem was solved?

elgeo
Valued Contributor II

Hello. We face exactly the same issue. Reading is quick, but writing takes a long time. To clarify, this is a table with only 700k rows. Any suggestions please? Thank you.

remote_table = spark.read.format("jdbc") \
    .option("driver", "com.ibm.as400.access.AS400JDBCDriver") \
    .option("url", "url") \
    .option("dbtable", "table_name") \
    .option("partitionColumn", "ID") \
    .option("lowerBound", "0") \
    .option("upperBound", "700000") \
    .option("numPartitions", "1000") \
    .option("user", "user") \
    .option("password", "pass") \
    .load()

remote_table.write.format("delta").mode("overwrite") \
    .option("overwriteSchema", "true") \
    .partitionBy("ID") \
    .saveAsTable("table_name")
