<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Working of @DLT.table in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/working-of-dlt-table/m-p/117534#M45519</link>
    <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/24053"&gt;@lingareddy_Alva&lt;/a&gt;&amp;nbsp;Thanks for the help.&lt;/P&gt;</description>
    <pubDate>Fri, 02 May 2025 16:47:44 GMT</pubDate>
    <dc:creator>_singh_vish</dc:creator>
    <dc:date>2025-05-02T16:47:44Z</dc:date>
    <item>
      <title>Working of @DLT.table</title>
      <link>https://community.databricks.com/t5/data-engineering/working-of-dlt-table/m-p/117431#M45494</link>
      <description>&lt;P&gt;I am using the @dlt.table decorator to create a table that will store history for my tables.&lt;/P&gt;&lt;P&gt;My code is structured like this:&lt;/P&gt;&lt;PRE&gt;@dlt.table(name="table_name")
def target():
    # custom Spark code to create the history
&lt;/PRE&gt;&lt;P&gt;The Spark code creates and prints the history correctly when I run it in a normal notebook, but when I run it inside the pipeline it does not create history; it just writes the most recent record somehow.&lt;/P&gt;&lt;P&gt;Can someone tell me exactly why this is happening, and what I can improve, please?&lt;/P&gt;</description>
      <pubDate>Thu, 01 May 2025 17:10:25 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/working-of-dlt-table/m-p/117431#M45494</guid>
      <dc:creator>_singh_vish</dc:creator>
      <dc:date>2025-05-01T17:10:25Z</dc:date>
    </item>
    <item>
      <title>Re: Working of @DLT.table</title>
      <link>https://community.databricks.com/t5/data-engineering/working-of-dlt-table/m-p/117436#M45497</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/163028"&gt;@_singh_vish&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;DLT treats the DataFrame returned by each @dlt.table function as the complete, current state of that table. So when you define a table with @dlt.table, whatever DataFrame the function returns replaces the previous data on every update, unless your logic is explicitly written to retain historical records.&lt;BR /&gt;That is why your custom Spark logic prints or calculates history correctly in a notebook, yet DLT overwrites the target table when it runs in the pipeline: the returned DataFrame itself has to preserve the history.&lt;/P&gt;&lt;P&gt;You need to union your new records with the existing table data inside the transformation itself. Here's an outline of how you can do it:&lt;BR /&gt;&lt;STRONG&gt;Option 1: Append-based logic inside @dlt.table&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;import dlt
from pyspark.sql.functions import current_timestamp

@dlt.table(name="my_table_history")
def create_history():
    new_data = ...  # your custom Spark code to fetch the current records

    try:
        existing_data = dlt.read("my_table_history")
        # Logic to identify new/changed rows
        combined = existing_data.unionByName(
            new_data.withColumn("ingest_time", current_timestamp()),
            allowMissingColumns=True,
        )
    except Exception:
        # First run: the table might not exist yet
        combined = new_data.withColumn("ingest_time", current_timestamp())

    return combined
&lt;/PRE&gt;&lt;P&gt;&lt;STRONG&gt;Option 2: Use an append flow instead of @dlt.table&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;If your goal is to append every batch to a history table, you can create a streaming table and write to it with the @dlt.append_flow decorator instead:&lt;/P&gt;&lt;PRE&gt;dlt.create_streaming_table("my_table_history")

@dlt.append_flow(target="my_table_history")
def insert_history():
    return your_spark_logic_df.withColumn("ingest_time", current_timestamp())
&lt;/PRE&gt;</description>
      <pubDate>Thu, 01 May 2025 17:45:05 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/working-of-dlt-table/m-p/117436#M45497</guid>
      <dc:creator>lingareddy_Alva</dc:creator>
      <dc:date>2025-05-01T17:45:05Z</dc:date>
    </item>
    <item>
      <title>Re: Working of @DLT.table</title>
      <link>https://community.databricks.com/t5/data-engineering/working-of-dlt-table/m-p/117534#M45519</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/24053"&gt;@lingareddy_Alva&lt;/a&gt;&amp;nbsp;Thanks for the help.&lt;/P&gt;</description>
      <pubDate>Fri, 02 May 2025 16:47:44 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/working-of-dlt-table/m-p/117534#M45519</guid>
      <dc:creator>_singh_vish</dc:creator>
      <dc:date>2025-05-02T16:47:44Z</dc:date>
    </item>
  </channel>
</rss>