<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Learning Series | Advanced Techniques with Spark Declarative Pipelines in Announcements</title>
    <link>https://community.databricks.com/t5/announcements/learning-series-advanced-techniques-with-spark-declarative/m-p/154092#M724</link>
    <description>&lt;P&gt;&lt;SPAN&gt;Databricks Academy has updated the &lt;/SPAN&gt;&lt;STRONG&gt;Data engineering professional pathway&lt;/STRONG&gt;&lt;SPAN&gt;. The course &lt;/SPAN&gt;&lt;A href="https://customer-academy.databricks.com/learn/courses/2972/advanced-techniques-with-spark-declarative-pipelines" target="_blank"&gt;&lt;STRONG&gt;Advanced Techniques with Spark Declarative Pipelines&lt;/STRONG&gt;&lt;/A&gt;&lt;SPAN&gt; is now live, officially replacing the previous course, &lt;FONT color="#808080"&gt;Databricks Streaming and Lakeflow Spark Declarative Pipelines&lt;/FONT&gt;. This new release now serves as &lt;/SPAN&gt;&lt;STRONG&gt;Course 1&lt;/STRONG&gt;&lt;SPAN&gt; in the &lt;/SPAN&gt;&lt;STRONG&gt;Advanced Data Engineering with Databricks&lt;/STRONG&gt;&lt;SPAN&gt; series.&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;You’ll learn to:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Build clean multi‑source pipelines&lt;/STRONG&gt;&lt;SPAN&gt;: Ingest data from many sources (like CSV and JSON) into one clean Bronze table.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Optimize layout and quality&lt;/STRONG&gt;&lt;SPAN&gt;: Use Liquid Clustering, Data Quality checks, and Multiplex Streaming to handle mixed‑schema events.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Automate history tracking: &lt;/STRONG&gt;&lt;SPAN&gt;Use AUTO CDC INTO for SCD Type 2 pipelines to track historical events.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Cross-platform access: &lt;/STRONG&gt;&lt;SPAN&gt;Build Delta Sinks and enable Iceberg reads via Delta UniForm for analytics across platforms.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Protect pipeline with quarantine flows&lt;/STRONG&gt;&lt;SPAN&gt;: Design quarantine pipelines to safely catch bad records, monitor violations, and manage schema evolution.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Protect pipeline with Quarantine flows:&lt;/STRONG&gt;&lt;SPAN&gt; Design Quarantine pipelines to safely route invalid records, monitor violation metrics, and manage schema evolution safely.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;Designed for:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Data engineers who’ve completed “&lt;/SPAN&gt;&lt;A href="https://customer-academy.databricks.com/learn/courses/2971/build-data-pipelines-with-lakeflow-spark-declarative-pipelines" target="_blank"&gt;&lt;SPAN&gt;Build Data Pipelines with Lakeflow Spark Declarative Pipelines&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;” or have equivalent SDP experience.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;SQL/Python and Delta Lake users leveling up to professional‑grade lakehouse engineering.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Teams handling CDC, data quality, and live production pipelines on Databricks.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;Course format &amp;amp; details:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Syllabus: 2 Sections | 17 Lessons&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Duration: 2 hours&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Skill Level: Professional&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Cost: Free&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Designed for Databricks Data Intelligence Platform (latest DBR)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p8i6j01 paragraph"&gt;&lt;A style="background-color: #ff3621; color: white; padding: 10px 20px; text-decoration: none; border-radius: 5px; font-weight: bold; display: inline-block;" href="https://customer-academy.databricks.com/learn/courses/2972/advanced-techniques-with-spark-declarative-pipelines" target="_blank" rel="noopener"&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; Enroll Now &lt;span class="lia-unicode-emoji" title=":backhand_index_pointing_left:"&gt;👈&lt;/span&gt;&lt;/A&gt;&lt;/P&gt;</description>
    <pubDate>Fri, 10 Apr 2026 16:39:52 GMT</pubDate>
    <dc:creator>Tushar_Parekar</dc:creator>
    <dc:date>2026-04-10T16:39:52Z</dc:date>
    <item>
      <title>Learning Series | Advanced Techniques with Spark Declarative Pipelines</title>
      <link>https://community.databricks.com/t5/announcements/learning-series-advanced-techniques-with-spark-declarative/m-p/154092#M724</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Databricks Academy has updated the &lt;/SPAN&gt;&lt;STRONG&gt;Data engineering professional pathway&lt;/STRONG&gt;&lt;SPAN&gt;. The course &lt;/SPAN&gt;&lt;A href="https://customer-academy.databricks.com/learn/courses/2972/advanced-techniques-with-spark-declarative-pipelines" target="_blank"&gt;&lt;STRONG&gt;Advanced Techniques with Spark Declarative Pipelines&lt;/STRONG&gt;&lt;/A&gt;&lt;SPAN&gt; is now live, officially replacing the previous course, &lt;FONT color="#808080"&gt;Databricks Streaming and Lakeflow Spark Declarative Pipelines&lt;/FONT&gt;. This new release now serves as &lt;/SPAN&gt;&lt;STRONG&gt;Course 1&lt;/STRONG&gt;&lt;SPAN&gt; in the &lt;/SPAN&gt;&lt;STRONG&gt;Advanced Data Engineering with Databricks&lt;/STRONG&gt;&lt;SPAN&gt; series.&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;You’ll learn to:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Build clean multi‑source pipelines&lt;/STRONG&gt;&lt;SPAN&gt;: Ingest data from many sources (like CSV and JSON) into one clean Bronze table.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Optimize layout and quality&lt;/STRONG&gt;&lt;SPAN&gt;: Use Liquid Clustering, Data Quality checks, and Multiplex Streaming to handle mixed‑schema events.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Automate history tracking: &lt;/STRONG&gt;&lt;SPAN&gt;Use AUTO CDC INTO for SCD Type 2 pipelines to track historical events.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Cross-platform access: &lt;/STRONG&gt;&lt;SPAN&gt;Build Delta Sinks and enable Iceberg reads via Delta UniForm for analytics across platforms.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Protect pipeline with quarantine flows&lt;/STRONG&gt;&lt;SPAN&gt;: Design quarantine pipelines to safely catch bad records, monitor violations, and manage schema evolution.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Protect pipeline with Quarantine flows:&lt;/STRONG&gt;&lt;SPAN&gt; Design Quarantine pipelines to safely route invalid records, monitor violation metrics, and manage schema evolution safely.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;Designed for:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Data engineers who’ve completed “&lt;/SPAN&gt;&lt;A href="https://customer-academy.databricks.com/learn/courses/2971/build-data-pipelines-with-lakeflow-spark-declarative-pipelines" target="_blank"&gt;&lt;SPAN&gt;Build Data Pipelines with Lakeflow Spark Declarative Pipelines&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;” or have equivalent SDP experience.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;SQL/Python and Delta Lake users leveling up to professional‑grade lakehouse engineering.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Teams handling CDC, data quality, and live production pipelines on Databricks.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;Course format &amp;amp; details:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Syllabus: 2 Sections | 17 Lessons&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Duration: 2 hours&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Skill Level: Professional&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Cost: Free&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Designed for Databricks Data Intelligence Platform (latest DBR)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p8i6j01 paragraph"&gt;&lt;A style="background-color: #ff3621; color: white; padding: 10px 20px; text-decoration: none; border-radius: 5px; font-weight: bold; display: inline-block;" href="https://customer-academy.databricks.com/learn/courses/2972/advanced-techniques-with-spark-declarative-pipelines" target="_blank" rel="noopener"&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; Enroll Now &lt;span class="lia-unicode-emoji" title=":backhand_index_pointing_left:"&gt;👈&lt;/span&gt;&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2026 16:39:52 GMT</pubDate>
      <guid>https://community.databricks.com/t5/announcements/learning-series-advanced-techniques-with-spark-declarative/m-p/154092#M724</guid>
      <dc:creator>Tushar_Parekar</dc:creator>
      <dc:date>2026-04-10T16:39:52Z</dc:date>
    </item>
  </channel>
</rss>

