12-08-2022 09:16 AM
I've been trying to upsert rows into a table in Azure Blob Storage (ADLS Gen 2) based on two partition columns (sample code below).
insert overwrite table new_clicks_table partition(client_id, mm_date)
select
click_id
,user_id
,click_timestamp_gmt
,campaign_id
,site_url
,client_id
,mm_date
from
old_clicks_table
where
mm_date between '2022-12-01' and '2022-12-07'
and
client_id in (123456)
;
However, using "INSERT OVERWRITE" deleted all the data that was previously in the table rather than just replacing the specified partitions (i.e., all data in the table for client ID 123456 before 2022-12-01 was deleted, and I was left with only the data from 2022-12-01 through 2022-12-07).
What's the easiest way to upsert this data into my table for only the specified partitions in my query and retain the rest of my data?
Can this be done using a combination of "INSERT" and "OVERWRITE" or do I need to use "MERGE"? If the latter, can someone provide the best example of how to write this data using "MERGE"?
12-08-2022 09:39 AM
Hi @Alex Netelkos,
Please check the dynamic partition overwrite parameter; more details here: https://docs.databricks.com/delta/selective-overwrite.html#dynamic-partition-overwrites
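For example, here's a rough sketch of how that could look applied to your query (same tables and filters from your post; untested):
-- enable dynamic partition overwrite for the current session
SET spark.sql.sources.partitionOverwriteMode=dynamic;
-- with this set, INSERT OVERWRITE replaces only the partitions
-- that the SELECT actually returns rows for
insert overwrite table new_clicks_table partition(client_id, mm_date)
select click_id, user_id, click_timestamp_gmt, campaign_id, site_url, client_id, mm_date
from old_clicks_table
where mm_date between '2022-12-01' and '2022-12-07'
and client_id in (123456);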
thanks,
Pat.
12-08-2022 11:54 AM
Hi Pat,
That worked! Thanks so much for the easy-to-implement tip.
Is there a way for me to add that snippet to my Spark Config instead of having to put it in every single query I run?
Best wishes,
Netty
12-08-2022 12:02 PM
I think you should be able to set it once, at the top of your notebook for example. It's a Spark session parameter, so it stays in effect until you restart the cluster.
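For example, a SQL cell run once near the top of the notebook should cover every query after it in that session (sketch):
-- applies to all subsequent queries in this Spark session
SET spark.sql.sources.partitionOverwriteMode=dynamic;
If you want it on permanently, I believe you can also add spark.sql.sources.partitionOverwriteMode dynamic to the cluster's Spark config, but then it applies to every write on that cluster.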
12-08-2022 09:31 PM
Hi @Netty,
Please check the dynamic partition overwrite parameter here: https://docs.databricks.com/delta/selective-overwrite.html#dynamic-partition-overwrites
12-08-2022 11:37 PM
The code below might help you.
Python:
# df holds the rows for the partitions you want to replace
(df.write
  .mode("overwrite")
  .option("partitionOverwriteMode", "dynamic")  # replace only the partitions present in df
  .saveAsTable("default.people10m")
)
SQL:
SET spark.sql.sources.partitionOverwriteMode=dynamic;
INSERT OVERWRITE TABLE default.people10m SELECT * FROM morePeople;
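And since the original question also asked about MERGE: the upsert can be written with MERGE INTO instead, which leaves untouched rows alone and doesn't depend on any partition-overwrite setting. A rough sketch, assuming click_id (together with the partition columns) uniquely identifies a row; adjust the ON condition to your real key:
-- updates rows that already exist, inserts the rest,
-- and leaves everything else in the target untouched
MERGE INTO new_clicks_table AS t
USING (
  SELECT click_id, user_id, click_timestamp_gmt, campaign_id, site_url, client_id, mm_date
  FROM old_clicks_table
  WHERE mm_date BETWEEN '2022-12-01' AND '2022-12-07'
    AND client_id IN (123456)
) AS s
ON t.client_id = s.client_id
AND t.mm_date = s.mm_date
AND t.click_id = s.click_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
Keeping the partition columns in the ON clause lets Delta prune files, which usually makes the MERGE much cheaper.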