<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Delta Live Table (Real Time Usage &amp; Application) in Get Started Discussions</title>
    <link>https://community.databricks.com/t5/get-started-discussions/delta-live-table-real-time-usage-amp-application/m-p/94990#M4462</link>
    <description>&lt;P&gt;Delta Live Tables (DLT) is a declarative ETL framework from Databricks and currently a hot topic in the data field. ETL frameworks broadly fall into two categories:&lt;BR /&gt;1) Procedural ETL: writing code that explicitly outlines the steps to transform data from source to target. This is a hands-on approach in which developers define each step of the ETL process. Examples: Informatica, Talend, SSIS.&lt;BR /&gt;2) Declarative ETL: a more abstract approach that focuses on defining the desired outcome. The developer declares the desired end state, and the tool automatically generates the code to transform the data into that state. Examples: ADF, AWS Glue, DLT.&lt;BR /&gt;&lt;BR /&gt;Main advantages of DLT:&lt;BR /&gt;1) Version control.&lt;BR /&gt;2) Deployment.&lt;BR /&gt;3) Data quality checks.&lt;BR /&gt;4) Governance.&lt;BR /&gt;5) The Delta engine automatically handles complex tasks such as data ingestion, data merging, and schema evolution.&lt;BR /&gt;6) Auto Loader and streaming tables can be used to incrementally land data into the Bronze layer for DLT pipelines or Databricks SQL queries.&lt;BR /&gt;&lt;BR /&gt;DLT supports two environment modes: 1) Development, 2) Production.&lt;BR /&gt;DLT pipeline refresh modes: 1) Continuous, 2) Triggered.&lt;BR /&gt;&lt;BR /&gt;With triggered execution, the system stops processing after successfully refreshing all (or the selected) tables in the pipeline once, ensuring each table in the update is based on the data available when the update started.&lt;BR /&gt;With continuous execution, Delta Live Tables processes new data as it arrives in the data sources to keep all tables in the pipeline fresh.&lt;BR /&gt;&lt;BR /&gt;The sample DLT pipeline notebook below follows the medallion architecture:&lt;BR /&gt;1) Ingest data into the Bronze layer (using Auto Loader for CSV and JSON file ingestion).&lt;BR /&gt;2) Load the Silver layer from the Bronze layer, applying constraints for data quality checks along with data cleaning and transformation.&lt;BR /&gt;3) Prepare refined Gold layer tables and share them with the BI and ML teams.&lt;BR /&gt;&lt;BR /&gt;Disadvantage: a DLT pipeline writes all of its tables into a single schema/database. If you want the Bronze, Silver, and Gold layer tables in different schemas, you cannot implement that with DLT; all three layers must be created in the same schema.&lt;/P&gt;</description>
    <pubDate>Sat, 19 Oct 2024 08:11:28 GMT</pubDate>
    <dc:creator>Sourav7890</dc:creator>
    <dc:date>2024-10-19T08:11:28Z</dc:date>
    <item>
      <title>Delta Live Table (Real Time Usage &amp; Application)</title>
      <link>https://community.databricks.com/t5/get-started-discussions/delta-live-table-real-time-usage-amp-application/m-p/94990#M4462</link>
      <description>&lt;P&gt;Delta Live Tables (DLT) is a declarative ETL framework from Databricks and currently a hot topic in the data field. ETL frameworks broadly fall into two categories:&lt;BR /&gt;1) Procedural ETL: writing code that explicitly outlines the steps to transform data from source to target. This is a hands-on approach in which developers define each step of the ETL process. Examples: Informatica, Talend, SSIS.&lt;BR /&gt;2) Declarative ETL: a more abstract approach that focuses on defining the desired outcome. The developer declares the desired end state, and the tool automatically generates the code to transform the data into that state. Examples: ADF, AWS Glue, DLT.&lt;BR /&gt;&lt;BR /&gt;Main advantages of DLT:&lt;BR /&gt;1) Version control.&lt;BR /&gt;2) Deployment.&lt;BR /&gt;3) Data quality checks.&lt;BR /&gt;4) Governance.&lt;BR /&gt;5) The Delta engine automatically handles complex tasks such as data ingestion, data merging, and schema evolution.&lt;BR /&gt;6) Auto Loader and streaming tables can be used to incrementally land data into the Bronze layer for DLT pipelines or Databricks SQL queries.&lt;BR /&gt;&lt;BR /&gt;DLT supports two environment modes: 1) Development, 2) Production.&lt;BR /&gt;DLT pipeline refresh modes: 1) Continuous, 2) Triggered.&lt;BR /&gt;&lt;BR /&gt;With triggered execution, the system stops processing after successfully refreshing all (or the selected) tables in the pipeline once, ensuring each table in the update is based on the data available when the update started.&lt;BR /&gt;With continuous execution, Delta Live Tables processes new data as it arrives in the data sources to keep all tables in the pipeline fresh.&lt;BR /&gt;&lt;BR /&gt;The sample DLT pipeline notebook below follows the medallion architecture:&lt;BR /&gt;1) Ingest data into the Bronze layer (using Auto Loader for CSV and JSON file ingestion).&lt;BR /&gt;2) Load the Silver layer from the Bronze layer, applying constraints for data quality checks along with data cleaning and transformation.&lt;BR /&gt;3) Prepare refined Gold layer tables and share them with the BI and ML teams.&lt;BR /&gt;&lt;BR /&gt;Disadvantage: a DLT pipeline writes all of its tables into a single schema/database. If you want the Bronze, Silver, and Gold layer tables in different schemas, you cannot implement that with DLT; all three layers must be created in the same schema.&lt;/P&gt;</description>
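      The medallion-style pipeline described in the post (Bronze via Auto Loader, Silver with quality constraints, Gold aggregates) could be sketched roughly as follows. This is a minimal illustration, not the author's actual notebook; it runs only inside a Databricks DLT pipeline, and the table names, landing path, and columns are assumptions:

      ```python
      # Sketch of a DLT medallion pipeline (requires a Databricks DLT pipeline;
      # "dlt" and "spark" are provided by the runtime, not importable locally).
      import dlt
      from pyspark.sql import functions as F

      @dlt.table(comment="Bronze: raw CSV landed incrementally with Auto Loader")
      def orders_bronze():
          return (
              spark.readStream.format("cloudFiles")
              .option("cloudFiles.format", "csv")
              .option("header", "true")
              .load("/mnt/raw/orders/")  # hypothetical landing path
          )

      @dlt.table(comment="Silver: cleaned, typed, and validated")
      @dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # data quality check
      def orders_silver():
          return (
              dlt.read_stream("orders_bronze")
              .withColumn("amount", F.col("amount").cast("double"))
              .dropDuplicates(["order_id"])
          )

      @dlt.table(comment="Gold: refined aggregates for BI and ML consumers")
      def orders_gold():
          return (
              dlt.read("orders_silver")
              .groupBy("customer_id")
              .agg(F.sum("amount").alias("total_amount"))
          )
      ```

      The `@dlt.table` decorators are what make this declarative: you define each table's end state as a query, and DLT infers the dependency graph, orchestration, and incremental processing for you.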
      <pubDate>Sat, 19 Oct 2024 08:11:28 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/delta-live-table-real-time-usage-amp-application/m-p/94990#M4462</guid>
      <dc:creator>Sourav7890</dc:creator>
      <dc:date>2024-10-19T08:11:28Z</dc:date>
    </item>
  </channel>
</rss>

