That depends on the query, the table, and which OPTIMIZE mode you use (bin-packing or Z-ordering).
Delta Lake by default collects statistics for the first 32 columns (this can be changed via the table property delta.dataSkippingNumIndexedCols).
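For example, if only your first few columns are ever used in filters, you can lower that limit so statistics collection skips the rest (table name is just a placeholder here):

```sql
-- Only collect min/max statistics for the first 5 columns of this table
ALTER TABLE my_table
SET TBLPROPERTIES ('delta.dataSkippingNumIndexedCols' = '5');
```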
Building statistics for long strings is also more expensive than, for example, for integers.
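To see why those statistics matter for query speed, here is a rough sketch (not Delta Lake's actual code) of the idea behind data skipping: each file carries min/max values per column, and the engine only reads files whose range could match the predicate.

```python
# Illustrative sketch of min/max data skipping; file names and ranges
# are made up for the example.
files = [
    {"path": "part-0", "min": 0,   "max": 99},
    {"path": "part-1", "min": 100, "max": 199},
    {"path": "part-2", "min": 200, "max": 299},
]

def files_to_scan(predicate_value):
    """Keep only files whose [min, max] range could contain the value."""
    return [f["path"] for f in files
            if f["min"] <= predicate_value <= f["max"]]

print(files_to_scan(150))  # only part-1 needs to be read
```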
There is also the fact that comparing numbers is faster than comparing strings.
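A quick micro-benchmark can illustrate the gap; exact numbers vary by machine, and the strings are zero-padded here only so that lexicographic order matches numeric order:

```python
import timeit

# Compare filtering the same values as integers vs. as strings.
ints = list(range(200_000))
strs = [f"{i:06d}" for i in ints]  # zero-padded: lexicographic == numeric order

t_int = timeit.timeit(lambda: sum(1 for x in ints if x > 100_000), number=3)
t_str = timeit.timeit(lambda: sum(1 for s in strs if s > "100000"), number=3)
print(f"int filter: {t_int:.3f}s  str filter: {t_str:.3f}s")
```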
https://docs.microsoft.com/en-us/azure/databricks/spark/latest/spark-sql/language-manual/delta-copy-...
Auto-scaling on your cluster, or spot instances being reclaimed, could also play a role.
So it's not easy to pinpoint the cause of the difference.