Hi @Aditya Deshpande There is no locking or enforcement mechanism for primary keys in Delta. You can use the row_number() function on the DataFrame to deduplicate on the key, or call distinct() on it, before writing in Delta format.
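As a sketch of that approach (the DataFrame `df`, key column `id`, ordering column `updated_at`, and output path are all illustrative assumptions, not from the original thread):

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Delta will not enforce a primary key, so deduplicate on the
// intended key yourself before the write.
val w = Window.partitionBy("id").orderBy(col("updated_at").desc)

val deduped = df
  .withColumn("rn", row_number().over(w))
  .filter(col("rn") === 1) // keep one row per key
  .drop("rn")

deduped.write.format("delta").mode("overwrite").save("/mnt/delta/my_table")
```

For exact-duplicate rows (rather than duplicate keys), `df.distinct()` before the write is sufficient.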
Hi @pascalvanbellen, there is no concept of FK, PK, or SK constraints in Spark. However, Databricks Delta supports SCD-type scenarios through MERGE INTO: https://docs.databricks.com/spark/latest/spark-sql/language-manual/merge-into.html#slowly-changing-data-scd-type-2
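A minimal SCD Type 2 sketch using MERGE INTO (the table names `customers` and `updates` and their columns are assumed for illustration; adapt them to your schema):

```scala
// Close the current row when the tracked attribute changes,
// and insert the new version as the current row.
spark.sql("""
  MERGE INTO customers
  USING updates
  ON customers.customerId = updates.customerId AND customers.current = true
  WHEN MATCHED AND customers.address <> updates.address THEN
    UPDATE SET current = false, endDate = updates.effectiveDate
  WHEN NOT MATCHED THEN
    INSERT (customerId, address, current, effectiveDate, endDate)
    VALUES (updates.customerId, updates.address, true, updates.effectiveDate, null)
""")
```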
...
@Nithin Tiruveedhi Please try as below. Below is an example for word count logic.

```scala
val tmpTable1 = sqlContext.sql("select row_number() over (order by count) as rnk, word, count from wordcount")
tmpTable1.registerTempTable("wordcount_rownum")
sqlContext.ca...
```