saurabh18cs
Honored Contributor III

Hi @Mey, I would suggest using the Lakeflow suite (Lakeflow Connect, Lakeflow Declarative Pipelines, Lakeflow Jobs) from Databricks to achieve this event-driven incremental workload. The example below uses SQL, but you can also use Python. read_files is an example of Databricks' natively supported Auto Loader for incremental streaming ingestion.
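To make the SQL approach concrete, here is a minimal sketch of a Lakeflow Declarative Pipelines table definition that uses read_files (Auto Loader) to pick up only newly arrived files on each update; the table name raw_events, the landing path, and the file format are illustrative placeholders, not part of the original post.

```sql
-- Sketch only: a streaming table fed by Auto Loader via read_files.
-- Replace the path and format with your own landing location.
CREATE OR REFRESH STREAMING TABLE raw_events
AS SELECT *
FROM STREAM read_files(
  '/Volumes/main/default/landing',  -- hypothetical landing path
  format => 'json'                  -- adjust to csv, parquet, etc.
);
```

Because the table is declared as STREAMING and reads via STREAM read_files, each pipeline run processes only files that have not been ingested before, which is what makes the workload incremental.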

example workflow:

(Screenshots of the example workflow configuration are attached in the original post.)
