<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to create a surrogate key sequence which I can use in SCD cases? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/how-to-create-a-surrogate-key-sequence-which-i-can-use-in-scd/m-p/28196#M20019</link>
    <description>&lt;P&gt;&lt;/P&gt;
&lt;P&gt;Hi &lt;A href="https://users/30242/pascalvanbellen.html" target="_blank"&gt;@pascalvanbellen&lt;/A&gt;, there is no concept of foreign keys (FK), primary keys (PK), or surrogate keys (SK) in Spark. However, Databricks Delta handles SCD-type scenarios through MERGE INTO: &lt;A href="https://docs.databricks.com/spark/latest/spark-sql/language-manual/merge-into.html#slowly-changing-data-scd-type-2" target="_blank"&gt;https://docs.databricks.com/spark/latest/spark-sql/language-manual/merge-into.html#slowly-changing-data-scd-type-2&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;If you want to build up a surrogate key concept in Spark, please read my blog post:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://hadoopist.wordpress.com/2016/05/24/generate-unique-ids-for-each-rows-in-a-spark-dataframe/" target="_blank"&gt;https://hadoopist.wordpress.com/2016/05/24/generate-unique-ids-for-each-rows-in-a-spark-dataframe/&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 17 Apr 2019 21:43:39 GMT</pubDate>
    <dc:creator>girivaratharaja</dc:creator>
    <dc:date>2019-04-17T21:43:39Z</dc:date>
    <item>
      <title>How to create a surrogate key sequence which I can use in SCD cases?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-create-a-surrogate-key-sequence-which-i-can-use-in-scd/m-p/28195#M20018</link>
      <description>&lt;P&gt;&lt;/P&gt;
&lt;P&gt;Hi Community,&lt;/P&gt;
&lt;P&gt;I would like to know if there is an option to create an integer sequence which persists even if the cluster is shut down. My goal is to use this integer value as a surrogate key to join different tables and to implement slowly changing dimension (SCD) scenarios.&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 17 Apr 2019 06:50:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-create-a-surrogate-key-sequence-which-i-can-use-in-scd/m-p/28195#M20018</guid>
      <dc:creator>Pascalvan_Belle</dc:creator>
      <dc:date>2019-04-17T06:50:04Z</dc:date>
    </item>
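A common approach to the question above is to persist the current maximum key in the target table, then assign new rows consecutive keys starting after it (in PySpark this is typically a `row_number()` window plus the stored maximum). The pure-Python sketch below illustrates only the key-assignment arithmetic; the function and column names are illustrative, not a Databricks API.

```python
def assign_surrogate_keys(existing_max_key, new_rows):
    """Assign consecutive surrogate keys starting after the stored maximum.

    existing_max_key: largest key already persisted (0 for an empty table).
    new_rows: list of row dicts without a key column.
    Returns a new list with an 'sk' column added to each row.
    """
    return [
        {**row, "sk": existing_max_key + offset}
        for offset, row in enumerate(new_rows, start=1)
    ]

# The target table already holds keys 1..3; two new rows arrive.
batch = assign_surrogate_keys(3, [{"name": "a"}, {"name": "b"}])
# batch[0]["sk"] == 4, batch[1]["sk"] == 5
```

Because the maximum is read from the persisted table rather than from cluster state, the sequence survives a cluster shutdown; the trade-off is that concurrent writers must coordinate on the stored maximum.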
    <item>
      <title>Re: How to create a surrogate key sequence which I can use in SCD cases?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-create-a-surrogate-key-sequence-which-i-can-use-in-scd/m-p/28196#M20019</link>
      <description>&lt;P&gt;&lt;/P&gt;
&lt;P&gt;Hi &lt;A href="https://users/30242/pascalvanbellen.html" target="_blank"&gt;@pascalvanbellen&lt;/A&gt;, there is no concept of foreign keys (FK), primary keys (PK), or surrogate keys (SK) in Spark. However, Databricks Delta handles SCD-type scenarios through MERGE INTO: &lt;A href="https://docs.databricks.com/spark/latest/spark-sql/language-manual/merge-into.html#slowly-changing-data-scd-type-2" target="_blank"&gt;https://docs.databricks.com/spark/latest/spark-sql/language-manual/merge-into.html#slowly-changing-data-scd-type-2&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;If you want to build up a surrogate key concept in Spark, please read my blog post:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://hadoopist.wordpress.com/2016/05/24/generate-unique-ids-for-each-rows-in-a-spark-dataframe/" target="_blank"&gt;https://hadoopist.wordpress.com/2016/05/24/generate-unique-ids-for-each-rows-in-a-spark-dataframe/&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 17 Apr 2019 21:43:39 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-create-a-surrogate-key-sequence-which-i-can-use-in-scd/m-p/28196#M20019</guid>
      <dc:creator>girivaratharaja</dc:creator>
      <dc:date>2019-04-17T21:43:39Z</dc:date>
    </item>
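The MERGE-based SCD Type 2 pattern the reply links to expires the current version of a changed row and inserts a new current version. A minimal pure-Python sketch of that logic follows (the dimension table is a list of dicts; the `key`, `value`, and `current` column names are illustrative assumptions, not taken from the linked documentation):

```python
def scd2_upsert(dimension, key, new_value):
    """Apply an SCD Type 2 change: expire the current row for `key`
    if its value changed, then append a new current row."""
    for row in dimension:
        if row["key"] == key and row["current"]:
            if row["value"] == new_value:
                return dimension  # value unchanged, nothing to do
            row["current"] = False  # expire the previous version
    dimension.append({"key": key, "value": new_value, "current": True})
    return dimension

# Customer 1 moves from NY to CA: the NY row is kept but expired,
# and a new current CA row is appended.
dim = [{"key": 1, "value": "NY", "current": True}]
scd2_upsert(dim, 1, "CA")
```

In Delta this same pattern is expressed declaratively with `MERGE INTO ... WHEN MATCHED THEN UPDATE ... WHEN NOT MATCHED THEN INSERT`, with the history rows (and typically effective-date columns) preserved in the table rather than in Python memory.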
  </channel>
</rss>

