NativeADLGen2RequestComparisonHandler: Error in request comparison (when running DLT)

thomas-totter
New Contributor III

For at least two weeks (but probably longer), our DLT pipeline has been posting error messages like the one below to log4j (driver logs). I tried both channels (preview, current), switched between serverless and classic compute, and started the pipeline in triggered as well as continuous mode. The error messages nevertheless keep coming in at a very high frequency (probably after streaming updates or similar triggers).

There are no corresponding messages in the DLT event log, and pipeline execution does not seem to be negatively affected either, at least as far as I can tell so far. Any leads as to why this suddenly happens (and maybe even how to solve it) would be very much appreciated!

Shortened error message from log4j:

25/08/29 10:21:45 ERROR NativeADLGen2RequestComparisonHandler: Error in request comparison
java.lang.NumberFormatException: For input string: "Fri, 29 Aug 2025 09:02:07 GMT"

Full version:

25/08/29 10:21:45 ERROR NativeADLGen2RequestComparisonHandler: Error in request comparison
java.lang.NumberFormatException: For input string: "Fri, 29 Aug 2025 09:02:07 GMT"
at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:67)
at java.base/java.lang.Long.parseLong(Long.java:711)
at java.base/java.lang.Long.parseLong(Long.java:836)
at scala.collection.immutable.StringLike.toLong(StringLike.scala:309)
at scala.collection.immutable.StringLike.toLong$(StringLike.scala:309)
at scala.collection.immutable.StringOps.toLong(StringOps.scala:33)
at com.databricks.sql.io.NativeADLGen2RequestComparisonHandler.doHandle(NativeADLGen2RequestComparisonHandler.scala:94)
at com.databricks.sql.io.NativeADLGen2RequestComparisonHandler.beforeAttempt(NativeADLGen2RequestComparisonHandler.scala:155)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:396)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:284)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:251)
at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.measureDurationOfInvocation(IOStatisticsBinding.java:494)
at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:465)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:249)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsClient.read(AbfsClient.java:1221)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.readRemote(AbfsInputStream.java:670)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.readInternal(AbfsInputStream.java:633)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.readOneBlock(AbfsInputStream.java:423)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.read(AbfsInputStream.java:360)
at java.base/java.io.DataInputStream.read(DataInputStream.java:151)
at com.databricks.common.filesystem.LokiAbfsInputStream.$anonfun$read$3(LokiABFS.scala:210)
at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:23)
at com.databricks.common.filesystem.LokiAbfsInputStream.withExceptionRewrites(LokiABFS.scala:200)
at com.databricks.common.filesystem.LokiAbfsInputStream.read(LokiABFS.scala:210)
at java.base/java.io.DataInputStream.read(DataInputStream.java:151)
at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:281)
at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:324)
at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:189)
at java.base/java.io.InputStreamReader.read(InputStreamReader.java:177)
at java.base/java.io.BufferedReader.fill(BufferedReader.java:162)
at java.base/java.io.BufferedReader.readLine(BufferedReader.java:329)
at java.base/java.io.BufferedReader.readLine(BufferedReader.java:396)
at com.databricks.sql.transaction.tahoe.storage.LineClosableIterator.hasNext(LineClosableIterator.scala:50)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at com.databricks.sql.transaction.tahoe.storage.LineClosableIterator.foreach(LineClosableIterator.scala:31)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.immutable.VectorBuilder.$plus$plus$eq(Vector.scala:668)
at scala.collection.immutable.VectorBuilder.$plus$plus$eq(Vector.scala:645)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at com.databricks.sql.transaction.tahoe.storage.LineClosableIterator.to(LineClosableIterator.scala:31)
at scala.collection.TraversableOnce.toIndexedSeq(TraversableOnce.scala:356)
at scala.collection.TraversableOnce.toIndexedSeq$(TraversableOnce.scala:356)
at com.databricks.sql.transaction.tahoe.storage.LineClosableIterator.toIndexedSeq(LineClosableIterator.scala:31)
at com.databricks.sql.transaction.tahoe.storage.LogStore.read(LogStore.scala:86)
at com.databricks.sql.transaction.tahoe.storage.LogStore.read$(LogStore.scala:83)
at com.databricks.tahoe.store.DelegatingLogStore.read(DelegatingLogStore.scala:38)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource$.actions$lzycompute$1(DeltaSource.scala:1812)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource$.actions$1(DeltaSource.scala:1811)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource$.com$databricks$sql$transaction$tahoe$sources$DeltaSource$$createClosableIterator$1(DeltaSource.scala:1819)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource$$anon$3.<init>(DeltaSource.scala:1827)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource$.createRewindableActionIterator(DeltaSource.scala:1826)
at com.databricks.sql.transaction.tahoe.sources.DeltaSourceMetadataEvolutionSupport.$anonfun$collectActions$2(DeltaSourceMetadataEvolutionSupport.scala:209)
at com.databricks.sql.transaction.tahoe.storage.ClosableIterator$IteratorFlatMapCloseOp$$anon$2.hasNext(ClosableIterator.scala:89)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:585)
at scala.collection.Iterator.toStream(Iterator.scala:1417)
at scala.collection.Iterator.toStream$(Iterator.scala:1416)
at scala.collection.AbstractIterator.toStream(Iterator.scala:1431)
at scala.collection.TraversableOnce.toSeq(TraversableOnce.scala:354)
at scala.collection.TraversableOnce.toSeq$(TraversableOnce.scala:354)
at scala.collection.AbstractIterator.toSeq(Iterator.scala:1431)
at com.databricks.sql.transaction.tahoe.sources.DeltaSourceMetadataEvolutionSupport.$anonfun$collectMetadataActions$1(DeltaSourceMetadataEvolutionSupport.scala:289)
at com.databricks.sql.transaction.tahoe.storage.ClosableIterator.processAndClose(ClosableIterator.scala:43)
at com.databricks.sql.transaction.tahoe.storage.ClosableIterator.processAndClose$(ClosableIterator.scala:41)
at com.databricks.sql.transaction.tahoe.storage.ClosableIterator$IteratorFlatMapCloseOp$$anon$2.processAndClose(ClosableIterator.scala:78)
at com.databricks.sql.transaction.tahoe.sources.DeltaSourceMetadataEvolutionSupport.collectMetadataActions(DeltaSourceMetadataEvolutionSupport.scala:288)
at com.databricks.sql.transaction.tahoe.sources.DeltaSourceMetadataEvolutionSupport.collectMetadataActions$(DeltaSourceMetadataEvolutionSupport.scala:285)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource.collectMetadataActions(DeltaSource.scala:804)
at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.$anonfun$checkReadIncompatibleSchemaChangeOnStreamStartOnce$3(DeltaSource.scala:627)
at scala.runtime.java8.JFunction1$mcVJ$sp.apply(JFunction1$mcVJ$sp.java:23)
at scala.Option.foreach(Option.scala:407)
at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.$anonfun$checkReadIncompatibleSchemaChangeOnStreamStartOnce$2(DeltaSource.scala:626)
at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.$anonfun$checkReadIncompatibleSchemaChangeOnStreamStartOnce$2$adapted(DeltaSource.scala:603)
at scala.Option.foreach(Option.scala:407)
at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.checkReadIncompatibleSchemaChangeOnStreamStartOnce(DeltaSource.scala:603)
at com.databricks.sql.transaction.tahoe.sources.DeltaSourceBase.checkReadIncompatibleSchemaChangeOnStreamStartOnce$(DeltaSource.scala:582)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource.checkReadIncompatibleSchemaChangeOnStreamStartOnce(DeltaSource.scala:804)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource.validateAndInitMetadataLogForPlannedBatchesDuringStreamStart(DeltaSource.scala:1464)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource.$anonfun$getBatch$1(DeltaSource.scala:1362)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.withOperationTypeTag(DeltaLogging.scala:325)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.withOperationTypeTag$(DeltaLogging.scala:312)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource.withOperationTypeTag(DeltaSource.scala:804)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.$anonfun$recordDeltaOperationInternal$2(DeltaLogging.scala:178)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:418)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:416)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource.recordFrameProfile(DeltaSource.scala:804)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.$anonfun$recordDeltaOperationInternal$1(DeltaLogging.scala:177)
at com.databricks.logging.UsageLogging.$anonfun$recordOperation$1(UsageLogging.scala:510)
at com.databricks.logging.UsageLogging.executeThunkAndCaptureResultTags$1(UsageLogging.scala:616)
at com.databricks.logging.UsageLogging.$anonfun$recordOperationWithResultTags$4(UsageLogging.scala:643)
at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:49)
at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:293)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:289)
at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:47)
at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:44)
at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:30)
at com.databricks.logging.AttributionContextTracing.withAttributionTags(AttributionContextTracing.scala:96)
at com.databricks.logging.AttributionContextTracing.withAttributionTags$(AttributionContextTracing.scala:77)
at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:30)
at com.databricks.logging.UsageLogging.recordOperationWithResultTags(UsageLogging.scala:611)
at com.databricks.logging.UsageLogging.recordOperationWithResultTags$(UsageLogging.scala:519)
at com.databricks.spark.util.PublicDBLogging.recordOperationWithResultTags(DatabricksSparkUsageLogger.scala:30)
at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:511)
at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:475)
at com.databricks.spark.util.PublicDBLogging.recordOperation(DatabricksSparkUsageLogger.scala:30)
at com.databricks.spark.util.PublicDBLogging.recordOperation0(DatabricksSparkUsageLogger.scala:120)
at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:210)
at com.databricks.spark.util.UsageLogger.recordOperation(UsageLogger.scala:78)
at com.databricks.spark.util.UsageLogger.recordOperation$(UsageLogger.scala:65)
at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:169)
at com.databricks.spark.util.UsageLogging.recordOperation(UsageLogger.scala:537)
at com.databricks.spark.util.UsageLogging.recordOperation$(UsageLogger.scala:516)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource.recordOperation(DeltaSource.scala:804)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperationInternal(DeltaLogging.scala:176)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperation(DeltaLogging.scala:166)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperation$(DeltaLogging.scala:155)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource.recordDeltaOperation(DeltaSource.scala:804)
at com.databricks.sql.transaction.tahoe.sources.DeltaSource.getBatch(DeltaSource.scala:1314)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$populateStartOffsets$4(MicroBatchExecution.scala:989)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at org.apache.spark.sql.execution.streaming.StreamProgress.foreach(StreamProgress.scala:27)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.populateStartOffsets(MicroBatchExecution.scala:986)
at org.apache.spark.sql.execution.streaming.MultiBatchRollbackSupport.populateStartOffsetsWithRollbackHandling(MultiBatchRollbackSupport.scala:125)
at org.apache.spark.sql.execution.streaming.MultiBatchRollbackSupport.populateStartOffsetsWithRollbackHandling$(MultiBatchRollbackSupport.scala:85)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.populateStartOffsetsWithRollbackHandling(MicroBatchExecution.scala:81)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.initializeExecution(MicroBatchExecution.scala:559)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStreamWithListener(MicroBatchExecution.scala:698)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:475)
at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$2(StreamExecution.scala:450)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1462)
at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:389)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:49)
at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:293)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:289)
at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:47)
at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:44)
at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:30)
at com.databricks.logging.AttributionContextTracing.withAttributionTags(AttributionContextTracing.scala:96)
at com.databricks.logging.AttributionContextTracing.withAttributionTags$(AttributionContextTracing.scala:77)
at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:30)
at com.databricks.spark.util.PublicDBLogging.withAttributionTags0(DatabricksSparkUsageLogger.scala:124)
at com.databricks.spark.util.DatabricksSparkUsageLogger.withAttributionTags(DatabricksSparkUsageLogger.scala:232)
at com.databricks.spark.util.UsageLogging.$anonfun$withAttributionTags$1(UsageLogger.scala:668)
at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:780)
at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:789)
at com.databricks.spark.util.UsageLogging.withAttributionTags(UsageLogger.scala:668)
at com.databricks.spark.util.UsageLogging.withAttributionTags$(UsageLogger.scala:666)
at org.apache.spark.sql.execution.streaming.StreamExecution.withAttributionTags(StreamExecution.scala:87)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:369)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$3(StreamExecution.scala:287)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$2(StreamExecution.scala:287)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51)
at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:104)
at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:109)
at scala.util.Using$.resource(Using.scala:269)
at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:108)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:286)
5 REPLIES

AlessandroM
New Contributor II

We started experiencing this as well in a streaming job using runtime 16.4.8.

It looks like some part of the logic at com.databricks.sql.io.NativeADLGen2RequestComparisonHandler.doHandle is expecting a Long, but it is receiving a Date/String instead.
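
A minimal Scala sketch of what the stack trace suggests (StringOps.toLong → Long.parseLong applied to an HTTP-date value); the exact field the handler reads is an assumption:

// Hypothetical illustration only: the handler appears to call .toLong on a
// value that is now an HTTP-date string rather than a numeric timestamp.
val received = "Fri, 29 Aug 2025 09:02:07 GMT"

"1693296000000".toLong   // fine: a purely numeric string parses as a Long
received.toLong          // throws java.lang.NumberFormatException, as in the driver log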

thomas-totter
New Contributor III

I also tried the setting below (via spark.conf), but that didn't help either:

spark.sql.legacy.timeParserPolicy: LEGACY

LEGACY_TIME_PARSER_POLICY - Azure Databricks - Databricks SQL | Microsoft Learn
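
(For reference, roughly how such a setting is applied; this is only a sketch. In a notebook session it is a one-liner, while for a DLT pipeline the same key/value would go into the pipeline's configuration settings instead:)

// Sketch only: assumes a notebook session where `spark` is predefined.
// Note: this policy governs how Spark SQL parses timestamp values in your data
// (to_timestamp and friends), not how the runtime reads storage metadata.
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")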

@thomas-totter That configuration influences how timestamp columns in your data are parsed; unfortunately it will not change the internal logic used by Databricks/Azure to fetch timestamp metadata from the object storage system.

This logic is part of the Databricks Runtime and outside of users' control, I'm afraid.

@AlessandroM Thank you! I actually knew what the original purpose of the setting was; for lack of other ideas I gave it a try nonetheless. But you are most likely right and it's up to Databricks to fix this.

mark_ott
Databricks Employee

The error message you are observing in your DLT pipeline logs, specifically:

java.lang.NumberFormatException: For input string: "Fri, 29 Aug 2025 09:02:07 GMT"

suggests that something in your pipeline (most likely the library code responsible for Azure Data Lake Storage Gen2 (ADLS Gen2) operations) is attempting to parse a date string as a numeric value, such as a timestamp or epoch time, and failing.

Root Cause

  • The error originates from the NativeADLGen2RequestComparisonHandler, part of the (likely Databricks/Spark) library that talks to Azure Data Lake Gen2.

  • The handler is expecting a numeric value (usually, a Unix timestamp, e.g., 1693296000), but it's receiving a formatted date string, e.g., "Fri, 29 Aug 2025 09:02:07 GMT".
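
A short Scala sketch of the mismatch, assuming the value in question is an HTTP Last-Modified-style header (the exact field is not visible from the log):

import java.time.ZonedDateTime
import java.time.format.DateTimeFormatter

// What the comparison handler appears to expect: a numeric epoch value.
val expected = "1693296000000".toLong

// What it appears to receive now: an RFC 1123 HTTP date, which Long.parseLong
// cannot handle and which would need a date parser instead, for example:
val received = "Fri, 29 Aug 2025 09:02:07 GMT"
val asEpochMillis = ZonedDateTime
  .parse(received, DateTimeFormatter.RFC_1123_DATE_TIME)
  .toInstant
  .toEpochMilli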

Why is this happening now?

  • Library Update or Backend Change: The format of the value returned (or logged) may have changed either due to a code/library update or a backend change on Microsoft/Azure's side.

  • Misconfigured Pipeline or Upstream Data Issue: If any feature in your pipeline switches format or passes metadata with invalid types, it can also cause this type of error.

  • External API/Response Change: If ADL Gen2 or some middleware changed how it formats headers or metadata (for instance, Last-Modified or similar fields), this could result in the current code being unable to handle the new format.

Why execution is unaffected

  • This appears to be a logging or comparison-related issue, where the function is intended for debug/logging or non-essential request validation. It catches and logs the error but does not bubble it up or halt processing.

  • The error might occur after streaming "triggers" or update cycles, explaining the high frequency.

How to Fix or Mitigate

Immediate Workarounds:

  • Since the error doesn't break functionality, you may continue unaffected, though frequent logging can obscure real issues or fill up logs quickly.

  • If possible, reduce the log level for this handler in your log4j configuration to avoid clutter in your logs.
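
A hedged sketch of one way to quiet that specific logger from driver-side code; whether this takes effect depends on the runtime's log4j 2 setup and on where you can run driver code (DLT and serverless compute limit that), so treat it as something to verify rather than a guaranteed fix:

import org.apache.log4j.{Level, Logger}

// Raise the threshold only for the noisy handler (name taken from the driver log),
// leaving all other logging untouched. FATAL suppresses the ERROR-level entries.
Logger
  .getLogger("com.databricks.sql.io.NativeADLGen2RequestComparisonHandler")
  .setLevel(Level.FATAL)

Where the compute type supports init scripts, the equivalent logger override in the driver's log4j2 configuration is the more durable option.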

Long-term Solutions:

  • Check for library updates: Make sure your Databricks, Spark, or any custom connector libraries for ADL Gen2 are up to date. Recent versions may have patched this issue if it’s a known bug.

  • Raise a support ticket: If using a managed service like Databricks, raise a ticket with them, quoting the handler name and error. They may have knowledge of recent changes.

  • Check pipeline config and metadata: Make sure that all fields, especially those involving timestamps or modification dates, are passed in the correct expected format.

  • Review release notes for Spark, Databricks Runtime, and Azure ADLS SDKs for any breaking changes related to date/time handling in the past few months.

Additional Notes

  • If you're using custom code/logic for ADLS file interactions, audit any places where you serialize or deserialize timestamps.

  • If this is strictly happening after certain DLT operations, consider temporarily disabling streaming tasks or checkpointing to see if the error stops.

This is a known class of error during changes in serialization/deserialization of metadata fields across cloud storage SDKs. Ensuring version compatibility and reporting to your cloud provider can help resolve it at the root if it's a backend or SDK bug.