Two things to check
One:
Double-check that you are not trying to authenticate with two different methods (e.g., a cluster-scoped credential trying to override the Unity Catalog credentials).
The previous Hive setup likely relied on a Cluster-Scoped Service Principal or Shared Access Signature (SAS) key configured directly in the cluster's Spark configuration (e.g., spark.hadoop.fs.azure.account.auth.type). Unity Catalog ignores these cluster-scoped secrets for paths defined in its External Locations. If the table is an External Table managed by Unity Catalog, you must rely on the credentials defined in the External Location.
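A minimal way to check both sides from a Databricks Python notebook is sketched below. The External Location name (`dataverse_landing`) is a placeholder for your own, and the config key shown is the generic one mentioned above (yours may be suffixed with the storage account name).

```python
# Sketch for a Databricks Python notebook; `dataverse_landing` is a
# placeholder External Location name -- substitute your own.

# 1) Is the cluster still injecting a legacy (Hive-era) ADLS auth setting?
#    Any value here means a cluster-scoped credential is configured and may
#    conflict with the Unity Catalog storage credential.
legacy_auth = spark.conf.get("spark.hadoop.fs.azure.account.auth.type", "<not set>")
print(f"Cluster-scoped fs.azure auth type: {legacy_auth}")

# 2) Which credential will Unity Catalog actually use for the table's path?
#    For UC external tables, the External Location and its storage credential
#    are what matter, not the cluster Spark config.
spark.sql("SHOW EXTERNAL LOCATIONS").show(truncate=False)
spark.sql("DESCRIBE EXTERNAL LOCATION dataverse_landing").show(truncate=False)
```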
Two:
Are you using Auto Loader?
If the Dataverse stream creates many very small files, or is in the middle of writing/overwriting a file when Spark tries to read it, you can see transient read failures.
Use Auto Loader if possible, as it handles file discovery and eventual consistency better.
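A hedged Auto Loader sketch is below. All paths, the schema/checkpoint locations, the file format, and the target table name are placeholder assumptions (Dataverse exports are often CSV, but adjust to whatever your pipeline actually lands).

```python
# Auto Loader sketch -- every path, the file format, and the target table
# below are placeholders; adjust them to your Dataverse landing zone and
# Unity Catalog target.
landing_path = "abfss://container@account.dfs.core.windows.net/dataverse/landing"
state_path   = "abfss://container@account.dfs.core.windows.net/dataverse/_autoloader_state"

(
    spark.readStream
        .format("cloudFiles")                        # Auto Loader file source
        .option("cloudFiles.format", "csv")          # assumed CSV export; change if Parquet/JSON
        .option("cloudFiles.schemaLocation", f"{state_path}/schema")
        .option("header", "true")
        .load(landing_path)
    .writeStream
        .option("checkpointLocation", f"{state_path}/checkpoint")
        .trigger(availableNow=True)                  # incremental, batch-style run
        .toTable("main.bronze.dataverse_raw")        # hypothetical UC target table
)
```

Auto Loader tracks which files it has already ingested via the checkpoint, so half-written or late-arriving files are picked up on the next run instead of failing the whole read.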
RG #Driving Business Outcomes with Data Intelligence