
Using DeltaTable.merge() and generating surrogate keys on insert?

Dekova
New Contributor II

I'm using merge to upsert data into a table:

import io.delta.tables.DeltaTable

// Upsert rows from merge_df into the destination table, matching on topic + key
DeltaTable
  .forName(DESTINATION_TABLE)
  .as("target")
  .merge(merge_df.as("source"), "source.topic = target.topic AND source.key = target.key")
  .whenMatched()
  .updateAll()
  .whenNotMatched()
  .insertAll()
  .execute()

I'd like to use one of the following generated columns to create a surrogate key on insert:

  1. surrogate_guid string generated always as (uuid())
  2. surrogate_id bigint generated always as identity

#1 doesn't work; I get the error "A generated column cannot use a non-deterministic expression".
#2 doesn't work; I get the error "cannot resolve surrogate_id in UPDATE clause".

What's the best practice for merging while getting a unique identifier (preferably a UUID) assigned on insert?

1 ACCEPTED SOLUTION

Accepted Solutions

daniel_sahal
Esteemed Contributor

@Dekova
1) uuid() is non-deterministic, meaning it returns a different result each time it is evaluated, which is why Delta rejects it as a generated column expression.
2) Per the documentation, "For Databricks Runtime 9.1 and above, MERGE operations support generated columns when you set spark.databricks.delta.schema.autoMerge.enabled to true."
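
For reference, that setting can be enabled per session before re-running the merge. A minimal sketch, assuming an active SparkSession named spark on a DBR 9.1+ cluster:

// Allow MERGE to resolve generated columns that the source DataFrame doesn't supply
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")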

What I would do in this situation is:
- Create a column surrogate_id bigint GENERATED ALWAYS AS IDENTITY,
- Create a column surrogate_guid that hashes surrogate_id, e.g. surrogate_guid string GENERATED ALWAYS AS (sha2(cast(surrogate_id AS STRING), 512)) (see the DDL sketch below)
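
A rough, unverified sketch of the DDL that suggestion describes, run from Scala (the table name and the non-surrogate columns are hypothetical placeholders, not from the original post):

// Destination table with an identity-based surrogate key plus a deterministic hash of it
spark.sql("""
  CREATE TABLE IF NOT EXISTS destination_table (
    topic STRING,
    key STRING,
    payload STRING,
    surrogate_id BIGINT GENERATED ALWAYS AS IDENTITY,
    surrogate_guid STRING GENERATED ALWAYS AS (sha2(CAST(surrogate_id AS STRING), 512))
  ) USING DELTA
""")

Unlike uuid(), sha2() is deterministic, which is why it is acceptable in a generated column expression.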

