06-25-2021 03:34 PM
The Delta Live Tables code in a notebook is essentially a template for how you want your data transformed. The Pipelines feature is the execution/operationalization component for that notebook.
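As a rough illustration of what that "template" looks like, here is a minimal DLT notebook sketch. It only runs inside a Delta Live Tables pipeline (the `dlt` module and `spark` session are provided by the runtime), and the table names, paths, and columns are hypothetical:

```python
# Runs only inside a Delta Live Tables pipeline; `dlt` and `spark`
# are supplied by the pipeline runtime. Names/paths are hypothetical.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw orders ingested from cloud storage")
def orders_raw():
    return spark.read.format("json").load("/mnt/raw/orders")

@dlt.table(comment="Cleaned orders with basic quality checks")
@dlt.expect_or_drop("valid_amount", "amount > 0")
def orders_clean():
    # Read from the upstream DLT table defined above
    return dlt.read("orders_raw").where(col("order_id").isNotNull())
```

The notebook just declares the tables and transformations; which catalog/database they land in is decided by the pipeline that executes it.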
With respect to testing the notebook, one option is to set up a dev/test pipeline that lands the data in a dev/test target database, run validation queries against it (either from a separate Databricks notebook or from Databricks SQL), and then promote the same notebook to production via a different pipeline that writes to a production target database.
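Sketching that dev/prod split in pipeline settings: the same notebook can be referenced by two pipelines that differ only in their `target` database (and the `development` flag). The names and paths below are hypothetical:

```json
{
  "name": "orders_pipeline_dev",
  "target": "dev_sales",
  "development": true,
  "libraries": [
    { "notebook": { "path": "/Repos/team/dlt_orders" } }
  ]
}
```

The production pipeline would point at the same notebook but set `"target": "prod_sales"` and `"development": false`, so validated code flows to production without edits to the notebook itself.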