Data Engineering

Structured Streaming - missing records in the gold layer: foreachBatch doesn't write some data

Kristin
New Contributor

Good afternoon,

Labels: Spark, Streaming, Delta, Gold

I'm facing an issue with the foreachBatch function in my streaming pipeline. The pipeline fetches data from the data lake storage using Auto Loader. This data is first written to a bronze layer. Next, I explode the JSON files and store the result in the silver layer (Structured Streaming). From the silver layer, I extract only the required data, perform a join with a static table, and then write these changes to the gold layer using the foreachBatch function.
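For reference, the bronze ingestion with Auto Loader looks roughly like this (a minimal sketch; the landing path, schema location, and output path are placeholders rather than my exact configuration):

# Auto Loader stream from the data lake into the bronze Delta table (paths are placeholders)
bronze_query = (spark.readStream
  .format("cloudFiles")
  .option("cloudFiles.format", "json")
  .option("cloudFiles.schemaLocation", "/mnt/bronze/business_partner/_schema")
  .load("/mnt/landing/business_partner")
  .writeStream
  .format("delta")
  .option("checkpointLocation", "/mnt/bronze/business_partner/checkpoint")
  .outputMode("append")
  .start("/mnt/bronze/business_partner"))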

I have activated the change data feed on the gold table, and these changes are subsequently written to Cosmos DB.
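The change data feed setup is roughly the following (a sketch; the table name is a placeholder and the Cosmos DB sink configuration is omitted):

# Enable the change data feed on the gold table (one-time table property)
spark.sql("ALTER TABLE gold.business_partner SET TBLPROPERTIES (delta.enableChangeDataFeed = true)")

# Read the change feed as a stream; this is what gets forwarded to Cosmos DB
cdf_stream = (spark.readStream
  .format("delta")
  .option("readChangeFeed", "true")
  .table("gold.business_partner"))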

The problem I'm encountering is that certain change events never get written to the gold layer. The issue seems to arise in a specific scenario: when no change has been made yet (for example, today), the first subsequent change is written correctly. However, if another change follows within the same day, it doesn't get recorded.

Below are my merge function and the Delta log. While the batch processing appears to run (as indicated by the Delta log history), the expected changes aren't reflected in the output.

[Screenshot: Delta log history (Kristin_0-1698065953782.png)]

# Imports used by the merge logic
from pyspark.sql.functions import col, row_number
from pyspark.sql.window import Window

# targetDF is the gold Delta table handle (a DeltaTable created elsewhere,
# e.g. with DeltaTable.forPath)

# Function to upsert microBatchOutputDF into the Delta table using MERGE
def upsertToDelta(microBatchOutputDF, batchId):
  # Keep only one row per PARTNER_ID in the micro-batch so the MERGE never
  # matches the same target row more than once
  dedupedDF = (microBatchOutputDF
    .withColumn("row", row_number().over(Window.partitionBy("PARTNER_ID").orderBy("ANSCHRIFT")))
    .filter(col("row") == 1)
    .drop("row"))

  (targetDF.alias("t")
    .merge(dedupedDF.alias("s"), "s.PARTNER_ID = t.PARTNER_ID")
    .whenMatchedUpdate(set =
      {
        "t.LAST_NAME": "s.LAST_NAME",
        "t.FIRST_NAME": "s.FIRST_NAME",
        "t.ANSCHRIFT": "s.ANSCHRIFT",
        "t.POST_CODE": "s.POST_CODE",
        "t.CITY": "s.CITY",
        "t.STREET": "s.STREET",
        "t.HOUSE_NUM1": "s.HOUSE_NUM2",
        "t.ADDRESS_TYPE": "s.ADDRESS_TYPE",
        "t.PARTNER_TYPE": "s.PARTNER_TYPE",
        "t.EMAILS": "s.EMAILS",
        "t.TEL": "s.TEL",
        "t.BANK_DETAILS": "s.BANK_DETAILS",
        "t.id": "s.id",
        "t.document_type": "s.document_type",
        "t.last_update": "s.last_update"
      }
    )
    .whenNotMatchedInsertAll()
    .execute())


def writeStreamGold(sourceDF):
  checkpoint = "/mnt/gold/business_partner/checkpoint"
  # Write the output of the streaming query into the gold Delta table
  # through the foreachBatch upsert defined above
  query = (sourceDF.writeStream
    .format("delta")
    .foreachBatch(upsertToDelta)
    .option("checkpointLocation", checkpoint)
    .outputMode("append")
    .queryName("toGold_GP")
    .start())

  return query
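The gold stream is started along these lines (again a sketch; the silver path, the static table name, and the join key are illustrative placeholders):

# Read the silver layer as a stream, join with the static reference table,
# and start the foreachBatch upsert into the gold layer
silverDF = spark.readStream.format("delta").load("/mnt/silver/business_partner")
staticDF = spark.read.table("reference.partner_types")

goldSourceDF = silverDF.join(staticDF, "PARTNER_TYPE", "left")

goldQuery = writeStreamGold(goldSourceDF)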
