Data Engineering

What's the easiest way to upsert data into a table? (Azure ADLS Gen2)

Netty
New Contributor III

I've been trying to upsert rows into a table in Azure Blob Storage (ADLS Gen2) based on two partition columns (sample code below).

insert overwrite table new_clicks_table partition(client_id, mm_date)
select
    click_id,
    user_id,
    click_timestamp_gmt,
    campaign_id,
    site_url,
    client_id,
    mm_date
from old_clicks_table
where mm_date between '2022-12-01' and '2022-12-07'
  and client_id in (123456);

However, using "INSERT OVERWRITE" deleted all the data that was previously in the table instead of just replacing the given partitions (i.e., all data in the table for client ID 123456 before 2022-12-01 was deleted, and I was left with only the data from 2022-12-01 through 2022-12-07).

What's the easiest way to upsert this data into my table for only the specified partitions in my query and retain the rest of my data?

Can this be done using a combination of "INSERT" and "OVERWRITE" or do I need to use "MERGE"? If the latter, can someone provide the best example of how to write this data using "MERGE"?

1 ACCEPTED SOLUTION

Pat
Honored Contributor III

Hi @Alex Netelkos,

Please check the dynamic partition overwrite parameter; more details here: https://docs.databricks.com/delta/selective-overwrite.html#dynamic-partition-overwrites
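Applied to the query in the original post, a sketch might look like this (per the linked docs, it assumes the table is a Delta table and the session setting below is enabled):

```sql
-- Enable dynamic partition overwrite for this session:
-- only partitions produced by the INSERT are replaced.
SET spark.sql.sources.partitionOverwriteMode=dynamic;

insert overwrite table new_clicks_table partition(client_id, mm_date)
select click_id, user_id, click_timestamp_gmt, campaign_id,
       site_url, client_id, mm_date
from old_clicks_table
where mm_date between '2022-12-01' and '2022-12-07'
  and client_id in (123456);
```

With dynamic mode, only the (client_id, mm_date) partitions that the SELECT actually writes are overwritten; every other partition in the table is left untouched.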

thanks,

Pat.


5 REPLIES


Netty
New Contributor III

Hi Pat,

That worked! Thanks so much for the easy-to-implement tip.

Is there a way for me to add that snippet to my Spark Config instead of having to put it in every single query I run?

Best wishes,

Netty

Pat
Honored Contributor III

I think you should be able to set it once, at the top of your notebook for example. It's a Spark session parameter, so it stays in effect until you restart the cluster.
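For example, a one-time session setting at the top of a notebook might look like this:

```sql
-- Run once at the top of the notebook; applies to every
-- subsequent INSERT OVERWRITE in this Spark session.
SET spark.sql.sources.partitionOverwriteMode=dynamic;
```

Alternatively, it should be possible to add `spark.sql.sources.partitionOverwriteMode dynamic` to the cluster's Spark config so every session on that cluster picks it up without any per-notebook setup.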

Rishabh264
Honored Contributor II

Hi @Netty,

Please check the dynamic partitions parameter from here: https://docs.databricks.com/delta/selective-overwrite.html#dynamic-partition-overwrites

Ajay-Pandey
Esteemed Contributor III

The code below might help you.

Python:
(df.write
  .mode("overwrite")
  .option("partitionOverwriteMode", "dynamic")
  .saveAsTable("default.people10m")
)
 
SQL:
SET spark.sql.sources.partitionOverwriteMode=dynamic;
INSERT OVERWRITE TABLE default.people10m SELECT * FROM morePeople;
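Since the original question also asked about MERGE: a sketch for the clicks table might look like the following. Note this assumes `click_id` (together with the partition columns) uniquely identifies a row, which the original post doesn't confirm, so adjust the ON clause to your actual key.

```sql
MERGE INTO new_clicks_table AS t
USING (
  SELECT click_id, user_id, click_timestamp_gmt, campaign_id,
         site_url, client_id, mm_date
  FROM old_clicks_table
  WHERE mm_date BETWEEN '2022-12-01' AND '2022-12-07'
    AND client_id IN (123456)
) AS s
ON  t.client_id = s.client_id
AND t.mm_date   = s.mm_date
AND t.click_id  = s.click_id       -- assumed unique key
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```

Unlike INSERT OVERWRITE, MERGE never deletes rows that don't match the source, so data outside the selected date range and client is preserved regardless of any overwrite-mode setting.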
