02-23-2022 01:47 AM
We have configured a workspace with our own VPC. We need to extract data from DB2 and write it in Delta format. For 550k records with 230 columns it took 50 minutes to complete the task; 15 million records take more than 18 hours. I'm not sure why the write takes such a long time. I'd appreciate a solution for this.
Code:
df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)
df.write.mode("append").format("delta").partitionBy("YEAR", "MONTH", "DAY").save(delta_path)
02-23-2022 01:56 AM
Please increase parallelism by adjusting jdbc settings:
columnName="key",
lowerBound=1L,
upperBound=100000L,
numPartitions=100,
These are example values. Ideally the key column should be unique and continuous so the data is divided equally without skew.
Please also analyze the Spark UI and look at what takes the most time (reading or writing?).
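For illustration, a minimal sketch of what the full read could look like (it reuses jdbcUrl, pushdown_query, connectionProperties and delta_path from the original post; the ID column, the bounds and the partition count are placeholder values, and in PySpark the keyword is column rather than columnName):
# Sketch only: ID, the bounds and numPartitions are example values.
# Take lowerBound/upperBound from SELECT MIN(ID), MAX(ID) on the source table.
df = spark.read.jdbc(
    url=jdbcUrl,
    table=pushdown_query,
    column="ID",           # numeric key used to split the read
    lowerBound=1,
    upperBound=15000000,
    numPartitions=100,     # also the max number of concurrent JDBC connections
    properties=connectionProperties,
)
df.write.mode("append").format("delta").partitionBy("YEAR", "MONTH", "DAY").save(delta_path)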
02-23-2022 04:09 AM
Hi @Hubert Dudek, I think the unique column should be an integer, not alphanumeric/string, right?
02-23-2022 04:53 AM
Hi @Hubert Dudek, if we don't have a unique integer/continuous column, how can this be done?
02-25-2022 05:41 AM
just try with numPartitions=100
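If there is truly no numeric key to split the read on, one fallback (an assumption on my part, not something stated above) is to repartition the DataFrame after the read so that at least the Delta write runs in parallel:
# Sketch only: without a partition column the JDBC read itself stays single-threaded;
# repartition() only spreads the downstream write across the cluster.
df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)
df = df.repartition(100)   # example value; match it to your cluster's total cores
df.write.mode("append").format("delta").partitionBy("YEAR", "MONTH", "DAY").save(delta_path)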
02-23-2022 02:19 AM
Thanks, Hubert, for your input. I checked the Spark UI; writing takes the longer time.
Any link I can check on increasing parallelism?
02-23-2022 04:21 AM
@Dhusanth Thangavadivel, in general, if we plan to import data with 100 partitions, then we need to make sure the cluster can spin up 100 threads.
It will also depend on the database, whether it allows 100 connections at a time.
What I observed is that if any columns hold huge text or blob data, then the read/write will be a bit slow.
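As a rough sketch of that sizing idea (the connection limit of 100 below is an assumption, check it with your DBA), one can compare the intended partition count with the cores Spark actually has:
# Sketch: keep numPartitions close to the cluster's total cores and below the
# number of concurrent connections the database allows.
total_cores = spark.sparkContext.defaultParallelism
db_connection_limit = 100   # assumed limit, check your DB2 configuration
num_partitions = min(100, total_cores, db_connection_limit)
print(f"numPartitions={num_partitions}, cores={total_cores}")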
02-25-2022 05:46 AM
Every CPU processes one partition at a time; the others will wait. You can have autoscaling, e.g. 2-8 executors, each with 4 CPUs, so it will process at most 32 (4x8) partitions concurrently.
Please also check the network configuration. A Private Link connection to ADLS is recommended.
After df = spark.read.jdbc, please verify the partition count using df.rdd.getNumPartitions()
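A small sketch of that check, assuming the same df as above:
# Verify how many partitions the JDBC read produced; with no partitioning options it is typically 1.
num_parts = df.rdd.getNumPartitions()
print(f"partitions: {num_parts}")
# Optional skew check - this triggers a full scan, so use it on a sample for large tables.
counts = df.rdd.glom().map(len).collect()
print(f"min/max rows per partition: {min(counts)}/{max(counts)}")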
11-10-2022 03:14 AM
Hello. We face exactly the same issue. Reading is quick, but writing takes a long time. Just to clarify, this is about a table with only 700k rows. Any suggestions, please? Thank you
remote_table = spark.read.format("jdbc") \
    .option("driver", "com.ibm.as400.access.AS400JDBCDriver") \
    .option("url", "url") \
    .option("dbtable", "table_name") \
    .option("partitionColumn", "ID") \
    .option("lowerBound", "0") \
    .option("upperBound", "700000") \
    .option("numPartitions", "1000") \
    .option("user", "user") \
    .option("password", "pass") \
    .load()
remote_table.write.format("delta").mode("overwrite") \
    .option("overwriteSchema", "true") \
    .partitionBy("ID") \
    .saveAsTable("table_name")