@Retired_mod - Thank you for your response. There is no change in the table dependencies.
The code to create the individual raw tables looks like this. The input is always the same 40 tables, with only the underlying parquet files changing. I can't understand why it creates 40 tables on the first run but only 2 tables on the second run.
import dlt

def CreateTable(tableSchema, tableName, tableFilePath):
    # Prefix the target name so the raw bronze tables are easy to identify
    schemaTableName = 'test_dlt_' + tableName.lower()

    @dlt.table(
        name=schemaTableName,
        comment="Raw data capture for " + tableName,
        table_properties={
            "quality": "bronze",
            "pipelines.autoOptimize.managed": "true"
        }
    )
    def create_live_table():
        # Load the current parquet snapshot for this table
        return spark.read.format("parquet").load(tableFilePath)
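
For context, the function is called once per table in a driver loop along these lines (the tableList entries shown here are hypothetical placeholders; the real list of 40 entries comes from our metadata):

# Sketch of the driver loop, assuming a metadata-driven list of
# (schema, name, path) tuples -- these example entries are placeholders.
tableList = [
    ("dbo", "customers", "/mnt/raw/customers/"),
    ("dbo", "orders", "/mnt/raw/orders/"),
    # ... remaining entries for all 40 tables
]

for tableSchema, tableName, tableFilePath in tableList:
    CreateTable(tableSchema, tableName, tableFilePath)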