Now that we are moving our DLT pipelines into production, we would like to start looking at unit testing the transformation logic inside our DLT notebooks.
We want to know how to unit test the PySpark logic/transformations independently, without having to spin up a DLT pipeline. The main reason is that running a DLT notebook interactively only reports that it is valid and prompts you to create a pipeline; the actual errors (incorrect schema locations, etc.) only surface when the pipeline itself runs. It's also hard to debug transformations within DLT, since you can't readily inspect inputs/outputs or add debug logic.
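For concreteness, here is a minimal sketch of the kind of separation we have been considering: keep each transformation as a plain PySpark function with no `dlt` import, make the `@dlt.table` function a thin wrapper around it, and test the function with pytest against a local SparkSession. The `clean_orders` function, the `orders_raw` table, and the column names are all hypothetical, just for illustration.

```python
# transformations.py -- plain PySpark, no dlt import, so it runs anywhere.
from pyspark.sql import DataFrame
import pyspark.sql.functions as F

def clean_orders(df: DataFrame) -> DataFrame:
    # Hypothetical transform: drop non-positive amounts and derive a date
    # column from the order timestamp.
    return (
        df.filter(F.col("amount") > 0)
          .withColumn("order_date", F.to_date("order_ts"))
    )


# DLT notebook (runs only in the DLT runtime) -- kept as a thin wrapper:
#
#   import dlt
#   from transformations import clean_orders
#
#   @dlt.table
#   def orders_clean():
#       return clean_orders(dlt.read("orders_raw"))


# test_transformations.py -- run with pytest locally, no pipeline needed.
import pytest
from pyspark.sql import SparkSession
from transformations import clean_orders

@pytest.fixture(scope="session")
def spark():
    return (
        SparkSession.builder.master("local[1]")
        .appName("dlt-unit-tests")
        .getOrCreate()
    )

def test_clean_orders_drops_non_positive_amounts(spark):
    df = spark.createDataFrame(
        [(1, 10.0, "2024-01-01 09:30:00"), (2, -5.0, "2024-01-02 10:00:00")],
        ["order_id", "amount", "order_ts"],
    )
    result = clean_orders(df)
    assert result.count() == 1
    assert result.first()["order_id"] == 1
```

The idea is that the DLT-specific pieces (`dlt.read`, expectations, schema locations) stay out of the functions under test, so pytest never needs the DLT runtime, but we're not sure this pattern holds up as pipelines get more complex.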
Does anyone have guidance on suitable approaches to unit testing DLT pipeline notebooks? Thanks