12-09-2021 12:45 PM
Hi,
I want to keep track of the streaming lag from the source table, which is a delta table.
I see that the query progress logs contain some information about the last version and the last file in that version for the end offset, but this doesn't give me the lag from the source table unless I also query the table and check what its latest version and file count are.
"sources" : [ {
"description" : "DeltaSource[dbfs:/mnt/defaultDatalake/zones/bronze/my_source_table]",
"startOffset" : {
"sourceVersion" : 1,
"reservoirId" : "15059b8a-0f48-4561-9424-8fcb0c8906de",
"reservoirVersion" : 39673,
"index" : -1,
"isStartingVersion" : false
},
"endOffset" : {
"sourceVersion" : 1,
"reservoirId" : "15059b8a-0f48-4561-9424-8fcb0c8906de",
"reservoirVersion" : 39674,
"index" : -1,
"isStartingVersion" : false
},
Just to be clear, by lag I mean: if, for example, the last row in the source table is row 100 and the streaming query is currently processing row 90, my lag from the source table would be 10.
One more technical point: how can I parse startOffset and endOffset? From the `SourceProgress` class I have direct access to the endOffset field, but not to its inner fields (like index). Should I just parse the endOffset string as JSON with a standard JSON library like Jackson or uJson?
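For illustration, here is a minimal sketch of that parsing approach, assuming Scala, a running StreamingQuery named `query`, and the uJson library on the classpath (any standard JSON library such as Jackson would work the same way):

import org.apache.spark.sql.streaming.StreamingQuery

// Minimal sketch: SourceProgress.endOffset is exposed as a JSON string,
// so parse it to reach inner fields such as reservoirVersion or index.
// Assumes com.lihaoyi's ujson is on the classpath.
def endOffsetReservoirVersion(query: StreamingQuery): Option[Long] =
  Option(query.lastProgress).flatMap { progress =>
    progress.sources.headOption.map { source =>
      val offset = ujson.read(source.endOffset)
      offset("reservoirVersion").num.toLong
    }
  }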
Thank you very much.
12-10-2021 12:12 AM
Thanks, Kaniz. This is a highly important question for some production jobs we have (and we are heavily invested in Databricks and Delta). I have seen others around the internet asking the same question as well.
Thank you.
Yerachmiel Feltzman | Data Platform Developer
12-10-2021 03:05 PM
Hi @Yerachmiel Feltzman
You can take a look at the following metrics in your stream query progress: https://docs.databricks.com/delta/delta-streaming.html#metrics
12-12-2021 01:09 AM
Hi, @Jose Gonzalez, I don't see anything in the link that gives the lag from the source Delta table.
Thanks anyway.
12-13-2021 09:24 AM
Hi @Yerachmiel Feltzman
Are you able to see these metrics?
{
  "sources" : [
    {
      "description" : "DeltaSource[file:/path/to/source]",
      "metrics" : {
        "numBytesOutstanding" : "3456",
        "numFilesOutstanding" : "8"
      }
    }
  ]
}
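For illustration, a short sketch of reading these source metrics programmatically, assuming Scala, Spark 3.1 or later (where SourceProgress exposes a metrics map), and a running StreamingQuery named `query`:

// Minimal sketch: SourceProgress.metrics is a java.util.Map[String, String],
// so values arrive as strings and absent keys come back as null.
val sourceMetrics = query.lastProgress.sources.head.metrics
val bytesOutstanding = Option(sourceMetrics.get("numBytesOutstanding")).map(_.toLong)
val filesOutstanding = Option(sourceMetrics.get("numFilesOutstanding")).map(_.toLong)
println(s"backlog: $bytesOutstanding bytes across $filesOutstanding files")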
12-13-2021 09:51 AM
I am.
Yerachmiel Feltzman | Data Platform Developer
12-28-2021 08:15 AM
@Yerachmiel Feltzman - Does the fact that you can see the metrics resolve the issue?
01-03-2022 01:33 AM
Hi, @Piper Wilson. Those metrics don't solve the issue. What I am asking for is a way to keep tracking the lag between the running stream and the source Delta table. Those metrics give me a lot of information, like input rates (as bytes and as files), but I don't see the lag from the source table there.
Again, by lag I mean the difference between the source table's last record position/timestamp and the row my stream is currently processing. For a simplistic example: if the source table has 100 rows and the stream has processed 90, it is lagging by 10 rows.
What I have in mind is something similar to Kafka's consumer lag, but another approach is welcome as well. The whole point is: how do I keep track of where my stream is relative to its source? How can I know if it is processing slower than expected, i.e., slower than the source Delta table is growing?
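For illustration only, one way to approximate such a lag for a Delta source is to compare the source table's latest commit version against the reservoirVersion in the stream's last endOffset. A minimal sketch, assuming Scala, the Delta Lake DeltaTable API, uJson for parsing, a SparkSession `spark`, and a running StreamingQuery named `query` (the helper name is hypothetical, and this measures lag in commit versions, not rows):

import io.delta.tables.DeltaTable

// Minimal sketch: version lag = latest commit version of the source table
// minus the reservoirVersion the stream has reached in its endOffset.
def versionLag(sourcePath: String): Option[Long] =
  Option(query.lastProgress).map { progress =>
    val latestVersion = DeltaTable.forPath(spark, sourcePath)
      .history(1)                 // most recent commit only
      .select("version")
      .head().getLong(0)
    val processed = ujson.read(progress.sources.head.endOffset)("reservoirVersion").num.toLong
    latestVersion - processed
  }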
Thanks. 🙂
01-25-2022 04:43 PM
Hi @Yerachmiel Feltzman,
You will need to take a look at the micro-batch metrics. This article explains what each metric means: https://databricks.com/blog/2020/07/29/a-look-at-the-new-structured-streaming-ui-in-apache-spark-3-0...
05-12-2022 06:44 AM
Hey @Yerachmiel Feltzman
I hope all is well.
Just wanted to check in if you were able to resolve your issue or do you need more help? We'd love to hear from you.
Thanks!
05-15-2022 05:53 AM
Hey @Vartika Nain. The issue was resolved. Thanks.
08-18-2024 11:40 AM - edited 08-18-2024 11:43 AM
Hello, how did you solve this problem?
Could you kindly share the solution with me? I have the same problem.
I would also like to see more details.