Handling large volumes of streamed transactional data using DLT

alano
New Contributor

We have a data stream from Event Hub with approximately 10 million rows per day (into one table). These records are insert-only (no updates). We are trying to find a solution to aggregate / group the data by multiple data points, and our requirements include:

  • a maximum 30-second delay from Event Hub to the gold table
  • comparing the current hour to last year's maximum hour based on various parameters (e.g. total transactions, sum of amounts)

We are thinking of using DLT to process live transactions (retaining only the current day's data in the DLT tables), while in parallel moving the data to a set of archive tables on a nightly/periodic basis, re-aggregating it there, and exposing it as a second dataset, then using both sources to compare live and historical data. We have not found similar industry examples and would like to know the positives and negatives of our approach, or of an alternative approach.
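To make the idea concrete, below is a rough sketch of the live path we are considering. It assumes the Event Hub is consumed through its Kafka-compatible endpoint; the table names, event schema, and connection settings are placeholders, and the authentication options are omitted:

import dlt
from pyspark.sql import functions as F

# Placeholder event schema; the real payload has more fields / data points.
EVENT_SCHEMA = "txn_id STRING, amount DOUBLE, event_time TIMESTAMP"

# `spark` is provided by the DLT runtime inside a pipeline.

@dlt.table(comment="Raw insert-only transactions streamed from Event Hub")
def transactions_raw():
    return (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
        .option("subscribe", "<eventhub-name>")
        # SASL/SSL authentication options omitted for brevity
        .load()
        .select(F.from_json(F.col("value").cast("string"), EVENT_SCHEMA).alias("e"))
        .select("e.*")
    )

@dlt.table(comment="Hourly aggregates over the live (current day) data")
def transactions_hourly_live():
    return (
        dlt.read("transactions_raw")
        .groupBy(F.date_trunc("hour", "event_time").alias("event_hour"))
        .agg(
            F.count("*").alias("total_transactions"),
            F.sum("amount").alias("total_amount"),
        )
    )

The nightly job would then move the raw data into the archive tables and maintain an equivalent hourly aggregate there, so the comparison (current hour vs. last year's maximum hour) would join this live table against the historical one.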

1 REPLY

artsheiko
Valued Contributor III

Hi, please find below a set of resources that I believe will be relevant for you.

Success stories

You can find success stories of companies leveraging streaming on Databricks here.

Videos

  1. Introduction to Data Streaming on the Lakehouse: an overview of Structured Streaming and DLT, with customer feedback
  2. Embracing the Future of Data Engineering: The Serverless, Real-Time Lakehouse in Action: an end-to-end example of processing large volumes of sensor data with DLT.

Demos

You can review the notebooks by clicking the "View the notebooks" button, or install the demos in your workspace to experiment with them:

  1. Unit Testing Delta Live Tables (DLT) for Production-Grade Pipelines
  2. Spark Streaming

Hope it helps,

Best,