07-11-2024 10:32 AM
I have just found out that Spark Structured Streaming does not commit offsets to Kafka but instead uses its own internal checkpoint system, and that there is therefore no way to visualize its consumption lag in a typical Kafka UI.
Since lag is an important metric in stream processing, I can't imagine that the community has not come up with a workaround for consumer lag tracking, but so far I could not find any out-of-the-box solution.
In any case, as I don't want to reinvent the wheel, I wonder if anyone can share a solution, either out of the box or custom, that people typically use for this?
Accepted Solutions
07-11-2024 01:20 PM - edited 07-11-2024 01:22 PM
Hi @Maatari ,
In Spark Structured Streaming, the current offset information is written continuously to checkpoint files. You can write a piece of code that extracts the currently consumed offsets from the checkpoint files, fetches the latest offsets from Kafka, and compares the two.
As an example, take a look at the article below. Unfortunately, I don't know of any out-of-the-box solution for this kind of problem.
PS. There is a Kafka offset committer for Spark Structured Streaming, but the last commits are from 4 years ago 🙂
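The approach above can be sketched roughly as follows. This is a minimal illustration, not a production tool: it assumes the `v1` layout of Spark's checkpoint `offsets/<batchId>` files (a version marker line, a batch-metadata JSON line, then one JSON line per source mapping topic to partition offsets), and `compute_lag` takes the Kafka end offsets as a plain dict, which you would populate yourself (for example with `KafkaConsumer.end_offsets` from the `kafka-python` library). Verify the file layout against your own checkpoint directory before relying on it.

```python
import json


def parse_checkpoint_offsets(offset_file_text: str) -> dict:
    """Parse one Spark Structured Streaming checkpoint offset file.

    Assumed v1 layout: line 1 is the version marker ("v1"), line 2 is
    batch metadata, and each following line is one source's offsets as
    JSON, e.g. {"events": {"0": 100, "1": 250}}.

    Returns {(topic, partition): consumed_offset}.
    """
    lines = [ln for ln in offset_file_text.strip().splitlines() if ln]
    consumed = {}
    for line in lines[2:]:  # skip version marker and batch metadata
        source = json.loads(line)
        for topic, partitions in source.items():
            for partition, offset in partitions.items():
                consumed[(topic, int(partition))] = int(offset)
    return consumed


def compute_lag(consumed: dict, end_offsets: dict) -> dict:
    """Lag per partition: Kafka end offset minus checkpointed offset."""
    return {
        tp: end_offsets[tp] - offset
        for tp, offset in consumed.items()
        if tp in end_offsets
    }
```

To use this against a live job, read the highest-numbered file in `<checkpoint>/offsets/`, feed its contents to `parse_checkpoint_offsets`, and pass the broker-reported end offsets to `compute_lag`; the result can then be pushed to whatever metrics system you already monitor.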
07-19-2024 05:20 AM
Hi @Maatari ,
Thank you for reaching out to our community! We're here to help you.
To ensure we provide you with the best support, could you please take a moment to review the response and choose the one that best answers your question? Your feedback not only helps us assist you better but also benefits other community members who may have similar questions in the future.
If you found the answer helpful, consider giving it a kudo. If the response fully addresses your question, please mark it as the accepted solution. This will help us close the thread and ensure your question is resolved.
We appreciate your participation and are here to assist you further if you need it!
Thanks,
Rishabh

