Hi there,
This was my previous approach:
- I had a Databricks notebook with a bronze-level streaming table reading data from volumes, which fed two downstream tables.
- The first was a gold-level materialized view; the second was a table storing ingestion metadata, such as the most_recent_timestamp of events.
- The ingestion_metadata table was shared using open Delta Sharing.
Then I would run an ingestion script from AWS ECS (or locally): I read the ingestion_metadata table via Delta Sharing, found the most_recent_timestamp value, fetched further data from that timestamp onward, and then did other processing. This was working well; the only issue was that I needed to run the Databricks notebook manually.
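The resume logic in that script boils down to something like the sketch below (the column name is from my metadata table; the sample rows and the `source` field are made up for illustration, and in the real script the rows come from a Delta Sharing read such as `delta_sharing.load_as_pandas(...)`):

```python
from datetime import datetime, timezone

# Hypothetical sample of what the shared ingestion_metadata rows contain
# (most_recent_timestamp is the real column; values are invented).
metadata_rows = [
    {"source": "events", "most_recent_timestamp": datetime(2024, 5, 1, tzinfo=timezone.utc)},
    {"source": "events", "most_recent_timestamp": datetime(2024, 5, 3, tzinfo=timezone.utc)},
]

def latest_checkpoint(rows):
    """Return the high-water mark the ingestion script should resume from."""
    return max(row["most_recent_timestamp"] for row in rows)

checkpoint = latest_checkpoint(metadata_rows)
# The real script then fetches only events newer than `checkpoint`.
```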
So I shifted to a DLT pipeline.
Things are otherwise the same, but in a DLT pipeline I cannot create a normal table; everything has to be a streaming table, a materialized view, or a view, and materialized views and views cannot be Delta shared. So I then tried to access the ingestion_metadata materialized view created by the DLT pipeline using the Databricks API, but I cannot read the actual data inside it.
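For reference, the REST attempt can be sketched like this (I'm assuming the Unity Catalog tables endpoint here; the host, token, and three-level table name are placeholders, not my real values):

```python
# Placeholders throughout; real workspace host and token elided.
host = "https://<workspace-host>"
full_name = "main.ingest.ingestion_metadata"  # hypothetical catalog.schema.table

# The Unity Catalog REST tables endpoint returns only table *metadata*
# (schema, owner, properties), never the rows themselves, which is why
# reading the materialized view's data this way fails.
url = f"{host}/api/2.1/unity-catalog/tables/{full_name}"
# A real request would be:
# requests.get(url, headers={"Authorization": f"Bearer {token}"})
```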
How can I do this, or is there another way I should approach this case?