Handling large volumes of streamed transactional data using DLT
04-10-2024 08:42 AM
We have a data stream from Event Hub with approximately 10 million rows per day landing in a single table. These records are insert-only (no updates). We are trying to find a solution to aggregate/group the data on multiple dimensions, and our requirements include:
- a maximum 30-second delay from Event Hub to the gold table
- comparing the current hour to last year's max hour on various measures (e.g. total transactions, sum of amount, etc.)
We are thinking of using DLT to process live transactions (retaining only the current day's data in the DLT tables), while in parallel moving the data to an archive set of tables on a nightly/periodic basis, re-aggregating that archive, and exposing it as a second dataset. We would then use both sources to compare live and historical data (a rough sketch of the live half of this idea follows below). We have not found similar industry examples and would like to know the positives and negatives of our approach, or of an alternative approach.
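For context, here is a minimal sketch of what the live half could look like as a DLT pipeline in Python, assuming the Event Hubs namespace is read through its Kafka-compatible endpoint. The bootstrap server, topic, connection string, and event schema (event_time, merchant_id, amount) are placeholders, not our actual setup:

```python
# Minimal DLT sketch: Event Hubs (Kafka endpoint) -> bronze -> hourly gold aggregates.
# Namespace, topic, connection string, and column names below are placeholders.
import dlt
from pyspark.sql import functions as F

EH_BOOTSTRAP = "<namespace>.servicebus.windows.net:9093"   # placeholder
EH_TOPIC = "transactions"                                  # placeholder
EH_SASL = (
    'kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule '
    'required username="$ConnectionString" password="<connection-string>";'
)

@dlt.table(comment="Raw insert-only events streamed from Event Hubs")
def bronze_transactions():
    return (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", EH_BOOTSTRAP)
        .option("subscribe", EH_TOPIC)
        .option("kafka.sasl.mechanism", "PLAIN")
        .option("kafka.security.protocol", "SASL_SSL")
        .option("kafka.sasl.jaas.config", EH_SASL)
        .load()
        # Assumed JSON payload; schema is illustrative only.
        .select(F.from_json(F.col("value").cast("string"),
                            "event_time TIMESTAMP, merchant_id STRING, amount DOUBLE").alias("e"))
        .select("e.*")
    )

@dlt.table(comment="Hourly aggregates for the current day (gold)")
def gold_hourly_aggregates():
    return (
        dlt.read_stream("bronze_transactions")
        .withWatermark("event_time", "1 minute")
        .groupBy(F.window("event_time", "1 hour"), "merchant_id")
        .agg(
            F.count("*").alias("total_transactions"),
            F.sum("amount").alias("total_amount"),
        )
    )
```

The archive/comparison half would then be a separate batch job that appends each day's aggregates to the archive tables and a query joining the live gold table against last year's per-hour maxima.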
04-16-2024 06:19 AM
Hi, please find below a set of resources I believe are relevant for you.
Success stories
You can find the success stories of companies leveraging streaming on Databricks here.
Videos
- Introduction to Data Streaming on the Lakehouse: a Structured Streaming and DLT overview with a customer's feedback
- Embracing the Future of Data Engineering: The Serverless, Real-Time Lakehouse in Action: an end-to-end example of how to process large volumes of data from many sensors with DLT
Demos
You can review the notebooks by clicking the View the notebooks button, or install the demo in your workspace to play with it.
Hope it helps,
Best,

