Data Engineering

Adding a deduplication method to Spark Streaming

patojo94
New Contributor II

Hi everyone, I am having some trouble adding a deduplication step to a file stream that is already running. The code I am trying to add is this:

df = (df
    .withWatermark("arrival_time", "20 minutes")
    .dropDuplicates(["event_id", "arrival_time"]))

However, I am getting the following error.

Caused by: java.lang.IllegalStateException: Error reading streaming state file of HDFSStateStoreProvider[id = (op=0,part=101),dir = dbfs:/mnt/checkpoints/silver_events/state/0/101]: dbfs:/mnt/checkpoints/silver_events/state/0/101/1.delta does not exist. If the stream job is restarted with a new or updated state operation, please create a new checkpoint location or clear the existing checkpoint location.

My two questions are:

  1. Why am I getting this error, and what does it mean?
  2. Is it really possible to delete a stream's checkpoint and not get duplicated data when restarting the stream?

Thank you!

1 ACCEPTED SOLUTION


Kaniz
Community Manager

Hi @patricio tojo, you can de-duplicate records in data streams using a unique identifier in the events. This works exactly the same as deduplication on static DataFrames using a unique identifier column. The query will store the necessary amount of data from previous records so that it can filter duplicate records. As with aggregations, you can use deduplication with or without a watermark.

  • With watermark - If there is an upper bound on how late a duplicate record may arrive, you can define a watermark on an event-time column and de-duplicate using both the unique ID and the event-time columns. The query uses the watermark to remove old state for past records that are no longer expected to receive duplicates. This bounds the amount of state the query has to maintain (see the sketch below).

  • Without watermark - Since there is no bound on when a duplicate record may arrive, the query stores the data from all past records as state.

https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#streaming-deduplica...
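
To make the two options above concrete, here is a minimal PySpark sketch based on the linked documentation. The source path, schema, sink, and checkpoint location are hypothetical placeholders; only the event_id and arrival_time column names come from the original question.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-dedup-example").getOrCreate()

# Hypothetical streaming source; adjust the path and schema to your data.
events = (spark.readStream
    .format("json")
    .schema("event_id STRING, arrival_time TIMESTAMP, payload STRING")
    .load("/mnt/landing/events"))

# Without watermark: state for every event_id is kept indefinitely.
deduped_unbounded = events.dropDuplicates(["event_id"])

# With watermark: duplicates arriving more than 20 minutes late are no longer
# filtered out, but old state is dropped, so the state store stays bounded.
deduped_bounded = (events
    .withWatermark("arrival_time", "20 minutes")
    .dropDuplicates(["event_id", "arrival_time"]))

# dropDuplicates adds a new stateful operator, so a restarted query needs a
# fresh checkpoint location (this is what the error in the question refers to).
query = (deduped_bounded.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/silver_events_dedup")
    .start("/mnt/silver/events"))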


2 REPLIES


Kaniz
Community Manager

Hi @patricio tojo, we haven't heard back from you on my last response, and I was checking to see whether you have found a resolution yet. If you have a solution, please share it with the community, as it can be helpful to others. Otherwise, we will respond with more details and try to help.
