So I'm used to developing notebooks interactively: write some code, run it to check for errors, and if there are none, filter and display the dataframe to confirm it does what I intended. With DLT pipelines, however, I can't run the code interactively.
Is my understanding correct that, to develop a DLT pipeline, I should first develop a notebook interactively and only AFTER everything works, add the DLT decorators around the code before creating the DLT pipeline?
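To make it concrete, here is roughly what I mean by "adding the decorators afterwards"; the table name and source path are just placeholders I made up:

```python
import dlt
from pyspark.sql import functions as F

# Interactively I would run something like this, cell by cell:
#   df = spark.read.format("json").load("/mnt/raw/events")   # placeholder path
#   display(df.filter(F.col("status") == "ok"))

# ...and only once that works, wrap the same logic for DLT:
@dlt.table(name="events_clean")  # placeholder table name
def events_clean():
    df = spark.read.format("json").load("/mnt/raw/events")
    return df.filter(F.col("status") == "ok")
```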
Developing this way seems like a big hassle to me, especially if an error occurs and I have to debug the pipeline: I would then have to remove the DLT decorators again before I could run the code interactively. Perhaps using two side-by-side notebooks could alleviate this, where one holds the interactive code and the other imports it and applies the DLT decorators, dlt.read, etc.? I think that might work (see the sketch after this paragraph).
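Something like the following is what I have in mind; all table names and paths are hypothetical, and the plain notebook would be pulled into the DLT notebook with %run (or by packaging it as a module):

```python
# Notebook A: plain transformations, runnable and testable interactively.
from pyspark.sql import DataFrame, functions as F

def clean_events(df: DataFrame) -> DataFrame:
    # All the actual logic lives here, with no DLT imports at all.
    return df.filter(F.col("status") == "ok")


# Notebook B: thin DLT wrapper that only adds decorators and dlt.read.
# (Assumes Notebook A has been brought in via %run ./notebook_a.)
import dlt

@dlt.table(name="events_clean")  # placeholder table name
def events_clean():
    # 'spark' is the SparkSession Databricks provides in every notebook.
    raw = spark.read.format("json").load("/mnt/raw/events")  # placeholder source
    return clean_events(raw)

@dlt.table(name="events_by_status")  # placeholder downstream table
def events_by_status():
    # Reads another table defined in the same pipeline via dlt.read.
    return dlt.read("events_clean").groupBy("status").count()
```

That way Notebook A stays debuggable cell by cell, and Notebook B never needs to change unless the table wiring changes, but I don't know if this is how people actually do it.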
If someone could give me some pointers on how to develop and maintain DLT pipelines in practice, I'd be super grateful. I feel like I'm missing some of the selling points.