@fhameed The error occurs when the Iceberg metadata written by Snowflake does not match the files in object storage. When Databricks reads the table, a verification process checks that the Iceberg metadata exactly matches the physical files in storage; if it doesn't, this error is thrown. This can happen if the Iceberg metadata is corrupted. As a workaround, you can try setting the configs below on classic compute to skip the checksum validation and see whether the table becomes queryable.
SET spark.databricks.delta.checksum.mismatch.fatal = false
SET spark.databricks.delta.uniform.ingress.refreshChecksumValidation.enabled = false
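If you're working in a notebook attached to classic compute, you can also apply the same settings at the session level through the Spark configuration API and then retry the read. A minimal sketch (the three-level table name is a placeholder for your own):

# Session-level equivalents of the SQL SET statements above;
# `spark` is the SparkSession predefined in Databricks notebooks.
spark.conf.set("spark.databricks.delta.checksum.mismatch.fatal", "false")
spark.conf.set("spark.databricks.delta.uniform.ingress.refreshChecksumValidation.enabled", "false")

# Retry the read with validation relaxed -- substitute your own catalog.schema.table.
spark.table("my_catalog.my_schema.my_iceberg_table").limit(10).show()

Note these only relax the validation so you can read past the mismatch; they don't repair the underlying metadata, so if the query succeeds you may still want to refresh the table from Snowflake.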