Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

IM_01
by Contributor
  • 283 Views
  • 4 replies
  • 0 kudos

how to use rules dynamically in LDP

Hi, I see there is a way to store rules in a table and use them in Python while implementing LDPs. How can I generate/read rules dynamically in the SQL way of implementing LDPs? Could you please help me with this? #DLT

Latest Reply
SteveOstrowski
Databricks Employee
  • 0 kudos

Hi @IM_01, the feature you are looking for, storing data quality rules in a table and applying them dynamically, is fully supported in Lakeflow Spark Declarative Pipelines (SDP) through the Python API. Unfortunately, there is currently no equivalent ...
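The table-driven pattern this reply describes is usually implemented by collecting rule rows into a `{name: constraint}` dict and passing it to `@dlt.expect_all` (or `@dlt.expect_all_or_drop`). A minimal sketch, assuming a hypothetical rules table with `name`, `constraint`, and `tag` columns; the `get_rules` helper and the sample rows are illustrative, not part of the dlt API:

```python
# Sketch of storing data quality rules in a table and applying them
# dynamically in a Python pipeline. The rows below stand in for the
# result of spark.table("rules").collect() in a real pipeline.
def get_rules(rows, tag):
    """Build the {name: constraint} dict that dlt.expect_all expects,
    keeping only the rules carrying the requested tag."""
    return {r["name"]: r["constraint"] for r in rows if r["tag"] == tag}

rules_rows = [  # hypothetical contents of a "rules" table
    {"name": "valid_id", "constraint": "id IS NOT NULL", "tag": "bronze"},
    {"name": "positive_amount", "constraint": "amount > 0", "tag": "bronze"},
    {"name": "valid_date", "constraint": "ds >= '2020-01-01'", "tag": "silver"},
]

# In an actual pipeline notebook you would then write:
#
#   import dlt
#
#   @dlt.table
#   @dlt.expect_all_or_drop(get_rules(rules_rows, "bronze"))
#   def bronze_orders():
#       return spark.read.table("raw_orders")

print(get_rules(rules_rows, "bronze"))
```

Because the constraints are plain SQL boolean expressions stored as strings, the same table can feed pipelines for multiple layers by varying the tag.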

3 More Replies
jeremy1
by New Contributor II
  • 21532 Views
  • 9 replies
  • 7 kudos

DLT and Modularity (best practices?)

I have [very] recently started using DLT for the first time. One of the challenges I have run into is how to include other "modules" within my pipelines. I missed the documentation where magic commands (with the exception of %pip) are ignored and was...

Latest Reply
Greg_Galloway
New Contributor III
  • 7 kudos

I like the approach @Arvind Ravish shared since you can't currently use %run in DLT pipelines. However, it took a little testing to be clear on how exactly to make it work. First, ensure in the Admin Console that the repos feature is configured as f...
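Since %run is ignored inside DLT pipelines, the repo-based workaround referenced here comes down to appending the repo root to `sys.path` and using a plain `import`. A runnable sketch of the mechanics, with a temporary directory standing in for the repo (the `/Workspace/Repos/...` path and `shared_transforms` module name are hypothetical):

```python
import os
import sys
import tempfile

# %run is ignored inside DLT pipelines, so shared code is imported as a
# regular Python module instead. In Databricks the appended path would be
# the repo root, e.g. sys.path.append("/Workspace/Repos/<user>/<repo>");
# here a temp directory stands in so the mechanics run anywhere.
repo_root = tempfile.mkdtemp()
with open(os.path.join(repo_root, "shared_transforms.py"), "w") as f:
    f.write("def double(x):\n    return 2 * x\n")

sys.path.append(repo_root)            # make the "repo" importable
from shared_transforms import double  # hypothetical shared module

print(double(21))  # 42
```

The same import then works in every notebook of the pipeline, which is what makes this a substitute for %run-style modularity.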

8 More Replies
aladda
by Databricks Employee
  • 2143 Views
  • 1 reply
  • 0 kudos
Latest Reply
aladda
Databricks Employee
  • 0 kudos

Delta Live Tables supports data quality checks via expectations. On encountering invalid records, you can choose to either retain them, drop them, or fail/stop the pipeline. See the link below for additional details: https://docs.databricks.com/data-e...
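The three outcomes the reply lists (retain, drop, fail) can be illustrated in plain Python. The `apply_expectation` helper below is hypothetical, a simulation of the behavior rather than part of the dlt API; in DLT the modes correspond to `@dlt.expect`, `@dlt.expect_or_drop`, and `@dlt.expect_or_fail`:

```python
# Plain-Python illustration of DLT expectation behaviors:
#   "expect"         -> retain invalid records, just count violations
#   "expect_or_drop" -> filter invalid records out
#   "expect_or_fail" -> stop processing on any violation
def apply_expectation(records, predicate, action="expect"):
    valid = [r for r in records if predicate(r)]
    invalid = len(records) - len(valid)
    if action == "expect":            # retain: keep everything
        return records, invalid
    if action == "expect_or_drop":    # drop: keep only valid records
        return valid, invalid
    if action == "expect_or_fail":    # fail: abort on violations
        if invalid:
            raise ValueError(f"{invalid} record(s) violated the expectation")
        return records, 0
    raise ValueError(f"unknown action: {action}")

rows = [{"amount": 10}, {"amount": -3}, {"amount": 7}]
kept, bad = apply_expectation(rows, lambda r: r["amount"] > 0, "expect_or_drop")
print(len(kept), bad)  # 2 1
```

The violation count mirrors the data quality metrics DLT records for each expectation in the pipeline event log.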
