Databricks Academy has updated the Data Engineering Professional pathway. The course Advanced Techniques with Spark Declarative Pipelines is now live, officially replacing the previous course, Databricks Streaming and Lakeflow Spark Declarative Pipelines. This new release serves as Course 1 in the Advanced Data Engineering with Databricks series.
You’ll learn to:
- Build clean multi‑source pipelines: Ingest data from multiple sources and formats (such as CSV and JSON) into a single clean Bronze table.
- Optimize layout and quality: Use Liquid Clustering, Data Quality checks, and Multiplex Streaming to handle mixed‑schema events.
- Automate history tracking: Use AUTO CDC INTO to build SCD Type 2 pipelines that track the full history of record changes.
- Cross-platform access: Build Delta Sinks and enable Iceberg reads via Delta UniForm for analytics across platforms.
- Protect pipelines with quarantine flows: Design quarantine pipelines to safely route invalid records, monitor violation metrics, and manage schema evolution.
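As a preview of the history-tracking topic above, a minimal sketch of an SCD Type 2 flow in Declarative Pipelines SQL might look like the following. All table and column names here (customers_history, customers_cdc_raw, customer_id, operation, sequence_num) are illustrative assumptions, not taken from the course, and exact syntax can vary by runtime version:

```sql
-- Declare the SCD Type 2 target as a streaming table.
CREATE OR REFRESH STREAMING TABLE customers_history;

-- AUTO CDC INTO preserves history: each incoming change closes the
-- prior version's validity window and inserts a new version row,
-- ordered by the sequencing column.
CREATE FLOW customers_cdc_flow AS
AUTO CDC INTO customers_history
FROM stream(customers_cdc_raw)
KEYS (customer_id)
APPLY AS DELETE WHEN operation = 'DELETE'
SEQUENCE BY sequence_num
STORED AS SCD TYPE 2;
```

The course itself walks through the full pattern; consult the Lakeflow Spark Declarative Pipelines documentation for the authoritative syntax.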
Designed for:
Course format & details:
Syllabus: 2 Sections | 17 Lessons
Duration: 2 hours
Skill Level: Professional
Cost: Free
Designed for Databricks Data Intelligence Platform (latest DBR)
🔗 Enroll Now 👈