Why can't we just copy all the DLT tables and materialized views from one UC catalog to another to get the historical data in place, and then run the DLT pipelines against those copied UC tables?
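Concretely, this is roughly the kind of copy we had in mind (a sketch only; the catalog, schema, and table names are placeholders, and on a real cluster the generated statements would be run with `spark.sql`):

```python
# Hypothetical migration sketch: deep-clone each DLT-owned table from TEST to PROD.
# Table names below are placeholders, not our real schema.
tables = ["sales_silver", "sales_gold"]

statements = [
    f"CREATE OR REPLACE TABLE prod.analytics.{t} DEEP CLONE test.analytics.{t}"
    for t in tables
]

for stmt in statements:
    print(stmt)  # on Databricks: spark.sql(stmt)
```

The open question is whether DLT would then recognize these cloned tables as its own outputs, or whether it would still insist on a Full Refresh All.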
We are migrating many very large tables from our TEST catalog to our PROD catalog. They are all generated and owned by DLT pipelines. Reprocessing the entire dataset is expensive (it's a big dataset), so we would like to avoid it.
If this is not possible, and the tables really do require a "Full Refresh All", can someone explain exactly why? I understand it has something to do with how the DLT API interfaces with the underlying UC table metadata, but I need specifics about what is actually going on under the hood. Can anyone help?