Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

What's the easiest way to upsert data into a table? (Azure ADLS Gen2)

Netty
New Contributor III

I have been trying to upsert rows into a table in Azure Blob Storage (ADLS Gen 2) based on two partition columns (sample code below).

insert overwrite table new_clicks_table partition(client_id, mm_date)
select
	click_id
	,user_id
	,click_timestamp_gmt
	,campaign_id
	,site_url
	,client_id
	,mm_date
from
	old_clicks_table
where
	mm_date between '2022-12-01' and '2022-12-07'
	and client_id in (123456)
;

However, "INSERT OVERWRITE" deleted all the data that was previously in the table rather than only replacing the partitions named in my query (i.e., all data in the table for client ID 123456 before 2022-12-01 was deleted, and I was left with only the data from 2022-12-01 through 2022-12-07).

What's the easiest way to upsert this data into my table for only the specified partitions in my query and retain the rest of my data?

Can this be done using a combination of "INSERT" and "OVERWRITE" or do I need to use "MERGE"? If the latter, can someone provide the best example of how to write this data using "MERGE"?

1 ACCEPTED SOLUTION

Pat
Honored Contributor III

Hi @Alex Netelkos,

Please check the dynamic partitions parameter, more details here: https://docs.databricks.com/delta/selective-overwrite.html#dynamic-partition-overwrites

thanks,

Pat.
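Applied to the table from the original question, the dynamic partition overwrite could look like this (a sketch reusing the table and column names from the question):

```sql
-- With dynamic mode, only partitions that receive new rows are replaced;
-- all other partitions (e.g. client 123456 before 2022-12-01) are left intact.
SET spark.sql.sources.partitionOverwriteMode = dynamic;

INSERT OVERWRITE TABLE new_clicks_table PARTITION (client_id, mm_date)
SELECT
	click_id
	,user_id
	,click_timestamp_gmt
	,campaign_id
	,site_url
	,client_id
	,mm_date
FROM
	old_clicks_table
WHERE
	mm_date BETWEEN '2022-12-01' AND '2022-12-07'
	AND client_id IN (123456)
;
```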


5 REPLIES


Netty
New Contributor III

Hi Pat,

That worked! Thanks so much for the easy-to-implement tip.

Is there a way for me to add that snippet to my Spark Config instead of having to put it in every single query I run?

Best wishes,

Netty

Pat
Honored Contributor III

I think you should be able to set it once, at the top of your notebook for example. It's a Spark session parameter; it will stay in effect until you restart the cluster.
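So rather than repeating it per query, one `SET` in the first cell of the notebook should cover the rest of the session, for example:

```sql
-- Run once per session; applies to every subsequent INSERT OVERWRITE
SET spark.sql.sources.partitionOverwriteMode = dynamic;
```

Alternatively, the same property (`spark.sql.sources.partitionOverwriteMode dynamic`) can be added to the cluster's Spark config in the cluster settings, so it is applied automatically each time the cluster starts.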

Rishabh264
Honored Contributor II

Hi @Netty,

Please check the dynamic partitions parameter from here: https://docs.databricks.com/delta/selective-overwrite.html#dynamic-partition-overwrites

Rishabh Pandey

Ajay-Pandey
Esteemed Contributor III

The code below might help you.

Python:
(df.write
  .mode("overwrite")
  .option("partitionOverwriteMode", "dynamic")
  .saveAsTable("default.people10m")
)
 
SQL:
SET spark.sql.sources.partitionOverwriteMode=dynamic;
INSERT OVERWRITE TABLE default.people10m SELECT * FROM morePeople;
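For completeness, since the question also asked about MERGE: a Delta `MERGE INTO` sketch using the column names from the original query could look like the following. Note the assumption that `click_id` uniquely identifies a row; including the partition columns in the `ON` clause helps Delta prune partitions.

```sql
MERGE INTO new_clicks_table AS t
USING (
  SELECT click_id, user_id, click_timestamp_gmt, campaign_id,
         site_url, client_id, mm_date
  FROM old_clicks_table
  WHERE mm_date BETWEEN '2022-12-01' AND '2022-12-07'
    AND client_id IN (123456)
) AS s
ON t.click_id = s.click_id
   AND t.client_id = s.client_id   -- partition columns in ON enable pruning
   AND t.mm_date = s.mm_date
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```

Unlike INSERT OVERWRITE, MERGE updates or inserts individual rows rather than rewriting whole partitions, which is the safer choice when the source may not contain a full replacement for each affected partition.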
