Writing to Delta tables/files is taking a long time

wissamimad
New Contributor

I have a DataFrame that is the result of a series of transformations on a large dataset (167 million rows), and I want to write it to Delta files and tables using the code below:

try:
    # Write the DataFrame as Delta files to the mount point, overwriting any existing data
    (df_new.write.format('delta')
     .option("delta.minReaderVersion", "2")
     .option("delta.minWriterVersion", "5")
     .option("spark.databricks.delta.optimizeWrite.enabled", True)
     .option("delta.columnMapping.mode", "name")
     .mode('overwrite')
     .option("overwriteSchema", True)
     .save('/mnt/mymountpoint/Gold_tables/tasoapplans'))

    try:
        # Overwrite the registered table with the same data
        df_new.write.insertInto('Gold_tables.tasoapplans', overwrite=True)
    except Exception:
        # If the table does not exist yet, register it over the Delta files written above
        spark.sql("CREATE TABLE IF NOT EXISTS Gold_tables.tasoapplans USING delta LOCATION '/mnt/mymountpoint/Gold_tables/tasoapplans'")
except Exception as e:
    dbutils.notebook.exit(str(e))

But the write is taking too much time (query = 1 hour, write = 1 hour 30 minutes).
The cluster used is:
Memory-optimized cluster Standard_DS12_v2 (28 GB memory, 4 cores)
Photon acceleration enabled
Min workers: 2
Max workers: 8

How can I improve the write performance?

2 REPLIES

Kaniz
Community Manager

Hi @wissamimad,

- Increase the number of workers in your cluster so that more tasks can run in parallel.
- Partition your data on an appropriate column to speed up writes and future reads (see the sketch after this list).
- Use operations such as INSERT INTO, CTAS, and COPY INTO from Parquet, or spark.write.format("delta").mode("append"), to automatically cluster data on write.
- Run OPTIMIZE frequently to keep the data efficiently clustered.
- Use a Delta writer client (version 13.2 or above) that supports all Delta write protocol table features used by liquid clustering.
- Tune Spark configurations related to shuffle, memory, and I/O to improve performance.
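
For illustration, here is a minimal sketch of a partitioned write with optimized writes enabled, followed by OPTIMIZE. The partition column (load_date), Z-order column (plan_id), and shuffle-partition count are placeholders you would replace with values that match your data; the path is the one from your post.

# Session-level settings (example values; tune for your data and cluster)
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "true")  # optimized writes are a Spark conf, not a writer option
spark.conf.set("spark.sql.shuffle.partitions", "200")                   # example value only

# Partitioned overwrite; "load_date" is a placeholder partition column
(df_new.write.format("delta")
 .mode("overwrite")
 .option("overwriteSchema", "true")
 .partitionBy("load_date")
 .save("/mnt/mymountpoint/Gold_tables/tasoapplans"))

# Compact small files after the write; "plan_id" is a placeholder Z-order column
spark.sql("OPTIMIZE delta.`/mnt/mymountpoint/Gold_tables/tasoapplans` ZORDER BY (plan_id)")

Partition only on a low-cardinality column you regularly filter on; if there is no natural partition key, an unpartitioned table with optimized writes plus regular OPTIMIZE is often the better choice.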

prasu1222
New Contributor II

Hi @Kaniz, I am having the same issue. I made an inner join on two Spark DataFrames, but it runs on only a single node and I am not sure how to modify it to run on many nodes. Similarly, when I write about 30 GB of data to a Delta table, it is still executing after almost 3 hours. How can we reduce the time?

It is a simple join of two tables: the first table has 50 million records and the second has 300k records. The inner join took 20 minutes, and I want to save the result as a new Delta table.

Here is the code:

# Inner join on the shared key columns
result_df = Invoice_Data.join(Fixed_df, on=['Code', 'item_no', 'supplier_no'], how='inner')

# Overwrite the Delta output, allowing the schema to change
result_df.write.option("overwriteSchema", "true").format("delta").mode("overwrite").save("abfss://data@abc.dfs.core.windows.net/features/MCA")
 
I have attached the timing metrics; let me know how we can optimize it.
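
Since the second table is only 300k rows, a common variant is to broadcast it so the 50-million-row side is not shuffled for the join. A minimal sketch based on the code above (assuming Fixed_df comfortably fits in executor memory):

from pyspark.sql.functions import broadcast

# Broadcast the small (300k-row) table so the large table avoids a shuffle for the join
result_df = Invoice_Data.join(
    broadcast(Fixed_df),
    on=['Code', 'item_no', 'supplier_no'],
    how='inner'
)

# Write the joined result as a Delta table, allowing the schema to change
result_df.write.format("delta").mode("overwrite").option("overwriteSchema", "true").save("abfss://data@abc.dfs.core.windows.net/features/MCA")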