Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Writing to Delta tables/files is taking a long time

wissamimad
New Contributor

I have a DataFrame that is the result of a series of transformations on a large dataset (167 million rows), and I want to write it out to Delta files and a table using the code below:

try:
    # Write the DataFrame out as Delta files at the mount-point path
    (df_new.write.format('delta')
     .option("delta.minReaderVersion", "2")
     .option("delta.minWriterVersion", "5")
     .option("spark.databricks.delta.optimizeWrite.enabled", True)
     .option("delta.columnMapping.mode", "name")
     .mode('overwrite')
     .option("overwriteSchema", True)
     .save('/mnt/mymountpoint/Gold_tables/tasoapplans'))

    try:
        # Load the data into the registered table; if that fails (e.g. the table
        # does not exist yet), register it as an external table over the path above
        df_new.write.insertInto('Gold_tables.tasoapplans', overwrite=True)
    except Exception:
        spark.sql("create table IF NOT EXISTS Gold_tables.tasoapplans using delta location '/mnt/mymountpoint/Gold_tables/tasoapplans'")
except Exception as e:
    dbutils.notebook.exit(str(e))

But the write is taking too long (the query takes about 1 hour and the write another 1 hour 30 minutes).
The cluster used is:
Memory-optimized cluster, Standard_DS12_v2 (28 GB memory, 4 cores)

Photon acceleration enabled

Min workers: 2

Max workers: 8

 

How can I improve the write performance?
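One alternative worth considering: optimized writes can be enabled as a Spark session conf, and the table can be created and loaded in a single saveAsTable call instead of a save() followed by a second insertInto pass. A minimal sketch, reusing the same df_new, mount point, and table name as above (the autoCompact conf and the ALTER TABLE step are assumptions, not part of the original code):

# Sketch only: same df_new, mount point, and database as in the post above.
# Enable optimized writes (and, as an assumption, auto compaction) for the session.
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "true")
spark.conf.set("spark.databricks.delta.autoCompact.enabled", "true")

# Create/overwrite the external table and write the files in one job,
# instead of writing with save() and then inserting into the table again.
(df_new.write.format("delta")
 .mode("overwrite")
 .option("overwriteSchema", "true")
 .option("path", "/mnt/mymountpoint/Gold_tables/tasoapplans")
 .saveAsTable("Gold_tables.tasoapplans"))

# Protocol versions and column mapping can then be applied as table properties.
spark.sql("""
    ALTER TABLE Gold_tables.tasoapplans SET TBLPROPERTIES (
        'delta.minReaderVersion' = '2',
        'delta.minWriterVersion' = '5',
        'delta.columnMapping.mode' = 'name'
    )
""")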

1 REPLY

prasu1222
New Contributor II

Hi @Retired_mod, I am having the same issue. I did an inner join on two Spark DataFrames and it runs on only a single node; I am not sure how to modify it to run on many nodes. The same thing happens when I write about 30 GB of data to a Delta table: it has been executing for almost 3 hours. How can we reduce the time?

It is a simple join of two tables. The first table has 50 million records and the second has 300k records; the inner join took 20 minutes, and I want to save the result as a new Delta table.

Here is the code:

result_df = Invoice_Data.join(Fixed_df, on=['Code', 'item_no', 'supplier_no'], how='inner')

result_df.write.option("overwriteSchema", "true").format("delta").mode("overwrite").save("abfss://data@abc.dfs.core.windows.net/features/MCA")

I have attached the timing metrics; let me know how we can optimize it.
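Since Fixed_df is only about 300k rows, a broadcast hint on the small side is the usual way to keep the join from collapsing into a single shuffle-heavy stage. A minimal sketch, reusing the DataFrame names and output path from the code above (the broadcast call is an assumption, not part of the original code):

from pyspark.sql.functions import broadcast

# Broadcast the ~300k-row table so the 50M-row side is not shuffled for the join.
result_df = Invoice_Data.join(
    broadcast(Fixed_df),
    on=['Code', 'item_no', 'supplier_no'],
    how='inner'
)

(result_df.write
 .format("delta")
 .mode("overwrite")
 .option("overwriteSchema", "true")
 .save("abfss://data@abc.dfs.core.windows.net/features/MCA"))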
