Got a very simple DLT pipeline which runs fine, but the final table "a" is missing data.
I've found that after the pipeline goes through a full refresh, if I then rerun just the final table, the record count jumps (from 1.2m to 1.4m) and the missing data comes back.
When I run the last bit of code manually in a notebook I also get all the records, so I don't know where the 1.2m figure comes from. It feels like the pipeline is somehow reading an older version of the r and c tables, but that shouldn't happen on a full refresh, right?
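For reference, the final step is essentially just a join of the two upstream tables. This is a simplified sketch of it (the join key "id" and join type are placeholders, not the real ones):

```python
import dlt

@dlt.table(name="a")
def a():
    # read the two upstream tables from the same pipeline
    r = spark.read.table("LIVE.r")
    c = spark.read.table("LIVE.c")
    # "id" is a placeholder join key, not the real column
    return r.join(c, "id", "left")
```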
I've also noticed that I get the following warning:
Your query 'r' reads from '<catalog name>.<schema name>.p' but must read from 'LIVE.p' instead. Always use the LIVE keyword when referencing tables from the same pipeline so that DLT can track the dependencies in the pipeline.
But in the notebook code I already use the LIVE. prefix and don't reference any catalog or schema anywhere.
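The r definition follows the same pattern, so I don't see why the warning thinks it reads <catalog name>.<schema name>.p directly (again simplified, with the real transformations omitted):

```python
@dlt.table(name="r")
def r():
    # only the LIVE keyword is used here, no catalog or schema qualifier
    return spark.read.table("LIVE.p")
```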