Hello,
We are building an application that reads data from a Kafka topic fed by a source system. After we receive the data, we apply some transformations and write the results to another Kafka topic. In this process, the source may send the same data twice.
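For context, here is a minimal sketch of the pipeline as it stands today, written for Spark Structured Streaming on Databricks. The broker address, topic names, transformation, and checkpoint path are all placeholders for our actual settings:

```python
from pyspark.sql import functions as F

# `spark` is the session provided by the Databricks notebook.
# Read the incoming stream from the source topic (names are placeholders).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "source_topic")
    .load()
)

# Our transformations, simplified here to decoding the key/value payload.
transformed = raw.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("value"),
)

# Write the transformed records to the target topic. Note: nothing here
# filters out records the source has already sent once.
query = (
    transformed.writeStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("topic", "target_topic")
    .option("checkpointLocation", "/tmp/checkpoints/kafka-dedup-question")
    .start()
)
```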
Our questions are:
1. How can we detect duplicates and send only new or updated records to the target Kafka topic? (We sketch one idea after this list.)
2. Where, and in what format, should we store the already-seen data in Databricks so we can check for duplicates?
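For question 1, we were wondering whether stateful deduplication in Structured Streaming is the right direction. This is only a sketch of what we have in mind, building on `raw` from the snippet above; it assumes the payload is JSON carrying a unique `event_id` plus an `event_time`, which may not match our real schema:

```python
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

# Assumed payload schema; event_id is the unique key the source attaches.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("payload", StringType()),
])

# `raw` is the Kafka stream from the earlier sketch.
parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
       .select("e.*")
)

# Drop replays: the watermark bounds how long Spark keeps dedup state, and
# including event_time in the keys (per the Structured Streaming docs) lets
# that state expire instead of growing forever.
deduped = (
    parsed
    .withWatermark("event_time", "1 hour")
    .dropDuplicates(["event_id", "event_time"])
)
```

For question 2, we are also wondering whether the usual pattern is instead to land the stream in a Delta table and deduplicate with MERGE on the unique key, so the table itself serves as the store of already-seen records. We are unsure which approach is preferable.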
Thank you,
Dheeraj