Data Engineering

data frame takes unusually long time to write for small data sets

Anonymous
Not applicable

We have configured a workspace with our own VPC. We need to extract data from DB2 and write it in Delta format. We tried it with 550k records and 230 columns, and it took 50 minutes to complete the task. 15 million records take more than 18 hours. Not sure why the write takes such a long time. A solution would be appreciated.

Code:

df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)

df.write.mode("append").format("delta").partitionBy("YEAR", "MONTH", "DAY").save(delta_path)

1 ACCEPTED SOLUTION

Accepted Solutions

Hubert-Dudek
Esteemed Contributor III

Please increase parallelism by adjusting the JDBC read settings:

columnName="key",
lowerBound=1L,
upperBound=100000L,
numPartitions=100,

These are example values. Ideally the key column should be unique and continuous, so the data is divided evenly without skew.

Please also analyze the Spark UI and look at what takes the most time (reading or writing?)
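
For reference, a minimal sketch of those settings applied to the read.jdbc() call from the original question. In the PySpark read.jdbc() API the keyword is column rather than columnName, and "key", the bounds, and numPartitions below are placeholder values to adapt to your table:

# Split the DB2 read into 100 parallel partitions on an integer key column
df = spark.read.jdbc(
    url=jdbcUrl,
    table=pushdown_query,
    column="key",        # integer column to split on ("partitionColumn" in the .option() API)
    lowerBound=1,        # approximate min/max of the key; only used to compute partition ranges
    upperBound=100000,
    numPartitions=100,
    properties=connectionProperties,
)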


8 REPLIES


Hi @Hubert Dudek, I think the unique column should be an integer, not alphanumeric/string, right?

Anonymous
Not applicable

Hi @Hubert Dudek, if we don't have a unique integer/continuous column, how can this be done?

Hubert-Dudek
Esteemed Contributor III

Just try with numPartitions=100.
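
For reference, a sketch of that suggestion in the .option() form of the reader (the dbtable value here is a placeholder). Note that, in the standard Spark JDBC source, numPartitions on its own mainly caps the number of concurrent connections; without a partitionColumn the read often still arrives as a single partition, which is worth verifying as suggested further down the thread:

df = (spark.read.format("jdbc")
      .option("url", jdbcUrl)
      .option("dbtable", "schema.table_name")  # placeholder table name
      .option("numPartitions", "100")
      .load())

print(df.rdd.getNumPartitions())  # often 1 when no partitionColumn is set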

Anonymous
Not applicable

Thanks Hubert for your input. I checked the Spark UI; writing takes the longer time.

Any link I can check about increasing parallelism?

RKNutalapati
Valued Contributor

@Dhusanth Thangavadivel, in general, if we are planning to import data with 100 partitions, then we need to make sure the cluster can spin up 100 threads.

It will also depend on the database, whether it will allow 100 connections at a time.

What I have observed is that if any columns contain huge text or BLOB data, then the read/write will be a bit slow.

Hubert-Dudek
Esteemed Contributor III

Every CPU processes one partition at a time; the others wait. You can have autoscaling, for example 2-8 executors, each with 4 CPUs, so it will process at most 32 (4x8) partitions concurrently.

Please also check the network configuration. A Private Link connection to ADLS is recommended.

After df = spark.read.jdbc(...), please verify the partition count using df.rdd.getNumPartitions().
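
As a rough illustration of putting those two points together (the 32 below simply mirrors the 8-executor x 4-CPU example above, and the write mirrors the code from the original question):

# Verify how many partitions the JDBC read produced
print(df.rdd.getNumPartitions())

# Illustrative sizing only: aim for roughly the number of concurrent task slots
# (8 executors x 4 CPUs = 32 in the example above) before the Delta write
df = df.repartition(32)
df.write.mode("append").format("delta").partitionBy("YEAR", "MONTH", "DAY").save(delta_path)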

elgeo
Valued Contributor II

Hello. We face exactly the same issue. Reading is quick, but writing takes a long time. Just to clarify, it is a table with only 700k rows. Any suggestions please? Thank you

remote_table = spark.read.format("jdbc") \
    .option("driver", "com.ibm.as400.access.AS400JDBCDriver") \
    .option("url", "url") \
    .option("dbtable", "table_name") \
    .option("partitionColumn", "ID") \
    .option("lowerBound", "0") \
    .option("upperBound", "700000") \
    .option("numPartitions", "1000") \
    .option("user", "user") \
    .option("password", "pass") \
    .load()

# Note: partitionBy("ID") creates one Delta partition (directory) per distinct ID value
remote_table.write.format("delta").mode("overwrite") \
    .option("overwriteSchema", "true") \
    .partitionBy("ID") \
    .saveAsTable("table_name")
