Hi @aniruth1000 ,
When using Delta Live Tables (DLT) pipelines, only the source can be an existing Delta table.
The target table must be fully managed by the DLT pipeline, including its creation and lifecycle.
Let's say you modified the code as suggested by @gchandra, so that it now looks like this:
import dlt
from pyspark.sql.functions import col

@dlt.table(name="source_table_dlt")
def source_table():
    # Read the existing Delta table that feeds the pipeline
    return spark.read.format("delta").table("table_latest")

# The apply_changes target must be a streaming table created by DLT itself
dlt.create_streaming_table("table_old")

dlt.apply_changes(
    target="table_old",
    source="source_table_dlt",
    keys=["id"],
    sequence_by=col("import_date"),
)
The requirement is that the target must not be a pre-existing Delta table that was created outside of DLT.
If a table with the given name (table_old) already exists as a managed Delta table that DLT did not create and does not manage, DLT will throw an error, because it cannot take over management of that table. This is what is happening in your case.
How to solve it?
The requirements:
1. Your target table will be loaded with data from "table_latest" on a regular basis
2. Your target table must also contain data from "table_old"
The steps:
1. Create a DLT pipeline as above.
2. Change the target to a different table name that DLT can own, for example "table_target", in both create_streaming_table and apply_changes.
3. Run a one-time data backfill from table_old, as described in the docs and sketched below.
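To make steps 2 and 3 concrete, here is a minimal sketch. Note that the once=True option on dlt.append_flow, and whether an append flow can be combined with apply_changes on the same target, depend on your pipeline runtime, so treat this as an illustration and check the backfill section of the DLT docs for your version; the function name backfill_table_old is just an illustrative choice:

import dlt
from pyspark.sql.functions import col

# Step 2: the target is now a new table that DLT creates and owns.
dlt.create_streaming_table("table_target")

dlt.apply_changes(
    target="table_target",
    source="source_table_dlt",
    keys=["id"],
    sequence_by=col("import_date"),
)

# Step 3: one-time backfill of the historical rows from the existing
# managed table "table_old" into the DLT-managed target. once=True marks
# this flow as a backfill that runs a single time and is skipped on
# subsequent pipeline updates.
@dlt.append_flow(target="table_target", once=True)
def backfill_table_old():
    return spark.read.table("table_old")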