<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: UC upgrade in Spark Streaming jobs in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/uc-upgrade-in-spark-streaming-jobs/m-p/113277#M44495</link>
    <description>&lt;P&gt;Hi&amp;nbsp;Vetrivel,&lt;/P&gt;&lt;P&gt;How are you doing today? As per my understanding, upgrading from Hive Metastore (HMS) to Unity Catalog (UC) for Structured Streaming jobs needs a careful approach to avoid failures or data duplication. The best approach is to first pause all streaming jobs, then migrate your tables to UC while making sure the table locations and checkpoint directories stay the same. After that, update your jobs to use the new three-level UC table names (catalog.schema.table), and restart them with the same checkpoints so they continue from where they left off. It&#8217;s a good idea to test everything in dev or staging first, check for any issues, and only then move to production. Using views or table aliases can also make the transition smoother with minimal code changes. Let me know if you need help setting this up or want a sample migration plan!&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Brahma&lt;/P&gt;</description>
    <pubDate>Fri, 21 Mar 2025 11:23:00 GMT</pubDate>
    <dc:creator>Brahmareddy</dc:creator>
    <dc:date>2025-03-21T11:23:00Z</dc:date>
    <item>
      <title>UC upgrade in Spark Streaming jobs</title>
      <link>https://community.databricks.com/t5/data-engineering/uc-upgrade-in-spark-streaming-jobs/m-p/113251#M44479</link>
      <description>&lt;P&gt;Could you kindly share the recommended approach for upgrading from HMS to UC for Structured Streaming jobs, ensuring seamless execution without failures or data duplication? I would also appreciate insights into any best practices you have followed during similar upgrades.&lt;/P&gt;</description>
      <pubDate>Fri, 21 Mar 2025 04:09:44 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/uc-upgrade-in-spark-streaming-jobs/m-p/113251#M44479</guid>
      <dc:creator>Vetrivel</dc:creator>
      <dc:date>2025-03-21T04:09:44Z</dc:date>
    </item>
    <item>
      <title>Re: UC upgrade in Spark Streaming jobs</title>
      <link>https://community.databricks.com/t5/data-engineering/uc-upgrade-in-spark-streaming-jobs/m-p/113277#M44495</link>
      <description>&lt;P&gt;Hi&amp;nbsp;Vetrivel,&lt;/P&gt;&lt;P&gt;How are you doing today? As per my understanding, upgrading from Hive Metastore (HMS) to Unity Catalog (UC) for Structured Streaming jobs needs a careful approach to avoid failures or data duplication. The best approach is to first pause all streaming jobs, then migrate your tables to UC while making sure the table locations and checkpoint directories stay the same. After that, update your jobs to use the new three-level UC table names (catalog.schema.table), and restart them with the same checkpoints so they continue from where they left off. It&#8217;s a good idea to test everything in dev or staging first, check for any issues, and only then move to production. Using views or table aliases can also make the transition smoother with minimal code changes. Let me know if you need help setting this up or want a sample migration plan!&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Brahma&lt;/P&gt;</description>
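      The reply above suggests updating job code from two-level HMS table names to three-level UC names while keeping checkpoint paths unchanged. A minimal, hypothetical Python sketch of that renaming step follows; the helper function, its name, and the default catalog "main" are assumptions for illustration, not part of the thread:

      ```python
      # Hypothetical helper: mechanically map a legacy two-level HMS table
      # name ("schema.table") to a three-level Unity Catalog name
      # ("catalog.schema.table"), so readStream/writeStream table references
      # can be updated consistently across jobs. Catalog "main" is assumed.
      def to_uc_name(hms_name: str, catalog: str = "main") -> str:
          """Prefix a two-level HMS table name with a UC catalog name."""
          parts = hms_name.split(".")
          if len(parts) != 2:
              raise ValueError(f"expected schema.table, got {hms_name!r}")
          return f"{catalog}.{parts[0]}.{parts[1]}"

      # Example: a job that read spark.readStream.table("sales.orders")
      # under HMS would, after migration, read the name produced here,
      # with the checkpointLocation option left exactly as before.
      print(to_uc_name("sales.orders"))            # main.sales.orders
      print(to_uc_name("sales.orders", "prod"))    # prod.sales.orders
      ```

      Keeping the checkpoint directory identical is what lets the restarted query resume from its last committed offsets; only the table identifier in the job code changes.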
      <pubDate>Fri, 21 Mar 2025 11:23:00 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/uc-upgrade-in-spark-streaming-jobs/m-p/113277#M44495</guid>
      <dc:creator>Brahmareddy</dc:creator>
      <dc:date>2025-03-21T11:23:00Z</dc:date>
    </item>
  </channel>
</rss>

